Everyone seems to be complaining about LLMs not actually being intelligent. Does anyone know if there are alternatives out there, either in theory or already made, that you would consider to be ‘more intelligent’ or have the potential to be more intelligent than LLMs?

  • TropicalDingdong@lemmy.world · 7 days ago

    I think what you are describing is “agency” and not necessarily intelligence.

    A goldfish has agency, but no amount of exposure to linear algebra will give it the ability to transpose a matrix.

    • Firebat@lemmy.dbzer0.com · 6 days ago

      What I was trying to say is that if the LLM doesn’t actually understand anything it says, it’s not actually intelligent, is it? Inputs produce astonishingly good outputs, but it’s not real AI.

      • TropicalDingdong@lemmy.world · 6 days ago

        “LLM doesn’t actually understand anything it says”

        Do you?

        Do I?

        Where do thoughts come from? Are you the thought or the thing experiencing the thought? Which holds the intelligence?

        I know enough about thought to know that you aren’t planning the words you are about to think next, at least not with any conscious effort. I also know that people tend not to know what they are trying to say or think until they go through the process; start talking and the words flow.

        Not altogether different from next-token prediction; maybe just with a network 100x as deep…
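
        (If it helps, here’s roughly what “next token prediction” looks like in code: a minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint. The model only ever scores what token comes next, appends it, and repeats.)

        ```python
        # Greedy next-token prediction: score every candidate token, append the
        # single most likely one, repeat. (Sketch; assumes transformers + gpt2.)
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        ids = tokenizer("Start talking and the words", return_tensors="pt").input_ids

        with torch.no_grad():
            for _ in range(10):
                logits = model(ids).logits        # scores over the whole vocabulary
                next_id = logits[0, -1].argmax()  # pick the most likely next token
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

        print(tokenizer.decode(ids[0]))
        ```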

        • Firebat@lemmy.dbzer0.com · 6 days ago

          This gets really deep into how we’re all made of non-living things and atoms and yet here we are, and why no other planet we know of has life like ours, etc. Super philosophical!

          But truly, the LLMs don’t understand the things they say, and Apple apparently just put out a paper arguing that they don’t reason either (if you consider that different from understanding). They’re claiming it’s all fancy pattern recognition. (Putting the link below if interested.)

          https://machinelearning.apple.com/research/illusion-of-thinking

          Another likely difference between a human and an LLM is the ability to understand the semantics behind the syntax, rather than just the text alone.

          I feel like there’s more that I want to add but I can’t quite think of how to say it so I’ll stop here.

        • howrar@lemmy.ca · 6 days ago

          An interesting study I recall from my neuroscience classes is that we “decide” on what to do (or in this case, what to say) slightly before we’re aware of the decision, and then our brain comes up with a story about why we made that decision so that it feels like we have agency.