• finitebanjo@lemmy.world · 20 hours ago

    Let’s not pretend statistical models are approaching humanity. The companies that make these statistical models proved as much themselves, in the papers OpenAI published in 2020 and DeepMind published in 2023.

    To reiterate: with INFINITE DATA AND COMPUTE TIME the models cannot approach human error rates. It doesn’t think, it doesn’t emulate thinking; it statistically resembles thinking to some accuracy below 95%, and it completely and totally lacks permanence in its statistical representation of thinking.
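For context, the “infinite data and compute” claim is about the irreducible-error term in published neural scaling laws: fitted loss curves contain a constant that no amount of parameters or data removes. A minimal sketch of that functional form (the constants are in the ballpark of published fits but should be treated as illustrative, not as the papers’ actual numbers):

```python
# Toy illustration of the scaling-law argument: published power-law fits
# model loss as L(N, D) = E + A/N**alpha + B/D**beta, where N is parameter
# count, D is training tokens, and E is an irreducible term.
# Constants below are illustrative, loosely patterned on published fits.

def loss(n_params, n_tokens, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    return E + A / n_params**alpha + B / n_tokens**beta

for scale in (1e9, 1e12, 1e15):
    print(f"N = D = {scale:.0e}: loss = {loss(scale, scale):.4f}")
# As N and D grow without bound, the loss approaches E, not zero --
# which is the formal version of "infinite compute cannot reach it."
```

Whether E sits above or below human error rates on a given task is the empirical question the thread is arguing about; the functional form itself only says the curve flattens.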

      • AppleTea@lemmy.zip · 8 hours ago

        If modern computers can reproduce sentience, then so can older computers. That’s just how general computing is. You really gonna claim magnetic tape can think? That punch-cards and piston transistors can produce the same phenomenon as tens of billions of living brain cells?

    • can@sh.itjust.works · 18 hours ago (edited)

      But let’s not also pretend people aren’t already falling in love with them. Or thinking they’re god, etc.

      • Duamerthrax@lemmy.world · 18 hours ago

        Some people are OK with lowering their own ability to make judgements just to convince themselves that LLMs are human-like. That’s the other solution to the Turing Test.

    • Gorilladrums@lemmy.world · 15 hours ago

      I think most people understand that these LLMs cannot think or reason; they’re just really good tools that can analyze data, recognize patterns, and generate relevant responses based on parameters and context. The people who treat LLM chatbots like they’re people have much deeper issues than just ignorance.

      • iii@mander.xyz · 8 hours ago

        The people who treat LLM chatbots like they’re people have much deeper issues than just ignorance.

        I don’t know if it’s an urban myth, but I’ve heard that about 20% of LLM inference time and electricity is spent on “hello” and “thank you” prompts. :)

      • finitebanjo@lemmy.world · 15 hours ago

        Then you clearly haven’t been paying attention, because just as zealously as you defend its nonexistent use cases, there are people defending the idea that it operates similarly to how a human or animal thinks.

        • Gorilladrums@lemmy.world · 14 hours ago

          My point is that those people are a very small minority, and they suffer from issues that go beyond their ignorance of how these models work.

          • finitebanjo@lemmy.world · 14 hours ago

            I think they’re more common than you realize. Ignorance of how these models work is the commonly held stance among the general public.

            • Gorilladrums@lemmy.world · 11 hours ago

              You’re definitely correct that most people are ignorant of how these models work. But I think most people understand these models aren’t sentient, and even among the ignorant, most never become emotionally attached to them. I’m just saying that the people who do end up developing feelings for chatbots go beyond ignorance. They have issues that require years of therapy.

        • Genius@lemmy.zip · 13 hours ago

          The difference is that the brain is recursive while these models are linear, but the fundamental structure is similar.

          • finitebanjo@lemmy.world · 12 hours ago

            The difference is that a statistical model is not a replacement for an emulation. Their structures are wildly different.

            • Genius@lemmy.zip · 12 hours ago

              Bold words coming from a human. You’re just some talking meat programmed through trial-and-error by evolution into approximating real thought. You weren’t designed with intention, you’re just a machine for regurgitating survival strategies. Fuck, eat, shit, that’s your purpose. There’s no intelligence behind the wet sacs you call your eyes.

              • finitebanjo@lemmy.world · 11 hours ago

                How many electricity powered machines processing binary data via crystal prisms did we see evolve organically?

                • Genius@lemmy.zip · 11 hours ago

                  Organic intelligence is a myth. You’re a philosophical zombie, imitating intelligence. Evolution does not produce intelligent creatures. You have no sensations, no consciousness, not even knowledge in the proper sense. Just bunches of neurons mindlessly imitating the external appearance of intelligence.

    • Log in | Sign up@lemmy.world · 16 hours ago

      Ten years ago I was certain that a natural language voice interface to a computer was going to stay science fiction permanently. I was wrong. In ten years time you may also be wrong.

      • finitebanjo@lemmy.world · 15 hours ago

        Well, if you want one that’s 98% accurate, then you were actually correct that it’s science fiction for the foreseeable future.

        • Log in | Sign up@lemmy.world · 14 hours ago

          And yet I just foresaw a future in which it wasn’t. AI has already exceeded Trump levels of understanding, intelligence, and truthfulness. Why wouldn’t it beat you or me later? Exponential growth in computing power and all that.

          • finitebanjo@lemmy.world · 14 hours ago (edited)

            The diminishing returns from added computing power grow much faster than the fairly static (and in many sectors plateauing) rate of growth in computing power itself. And if you believe OpenAI and DeepMind, their 2020 and 2023 studies already proved that even INFINITE processing power cannot reach it.

            They already knew it wouldn’t succeed, they always knew, and they told everyone, but we’re still surrounded by people like you being grifted by it all.

            EDIT: I must be talking to a fucking bot because I already linked those scientific articles earlier, too.

            • Log in | Sign up@lemmy.world · 14 hours ago (edited)

              Thanks for the abuse. I love it when I’m discussing something with someone and they start swearing at me and calling me names because I disagree. Really makes it fun. /s You can fuck right off yourself too, you arrogant tool.

            • abruptly8951@lemmy.world · 10 hours ago

              Can you go into a bit more detail on why you think these papers are such a home run for your point?

              1. Where do you get 95% from? These papers don’t really go into much detail on human performance, and 95% isn’t mentioned in either of them.

              2. These papers are for transformer architectures trained with next-token loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other losses (contrastive, RL, flow matching) to which these particular curves do not apply.

              3. These papers assume early stopping; have you heard of the grokking phenomenon? (Not to be confused with the Twitter bot.)

              4. These papers only consider finite-size datasets, and relatively small ones at that. For instance, how many “tokens” would a 4-year-old have processed? I imagine that question should be somewhat quantifiable.

              5. These papers do not consider multimodal systems.

              6. You talked about permanence; does a RAG solution not overcome this problem?

              I think there is a lot more we don’t know about these things than what we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic
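Point 4’s question can be roughly quantified. A back-of-envelope sketch, where every figure is an order-of-magnitude guess rather than a measurement:

```python
# Rough estimate of how many word tokens of speech a four-year-old
# might have heard. All numbers are assumptions for illustration.
words_per_day = 15_000        # assumed words of speech heard daily
days = 4 * 365                # four years, ignoring leap days
words_heard = words_per_day * days
print(f"~{words_heard:,} words heard by age four")
# Even generous assumptions land in the 1e7-1e8 range, several orders
# of magnitude less text than the trillions of tokens used to train
# large LLMs -- which is the asymmetry point 4 is gesturing at.
```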