Everyone seems to be complaining about LLMs not actually being intelligent. Does anyone know if there are alternatives out there, either in theory or already made, that you would consider to be ‘more intelligent’ or have the potential to be more intelligent than LLMs?

  • Firebat@lemmy.dbzer0.com · 7 days ago

    I think it’s important to consider that language can evolve and mean different things over time. “Artificial intelligence,” as used today, is really just a rebranding of machine-learning algorithms. They definitely do seem “intelligent,” but having intelligence and merely seeming to have it are two different things.

    Real AI, or what is being called “Artificial General Intelligence,” doesn’t exist yet.

    How are you defining intelligence anyways?

    • propitiouspanda@lemmy.cafeOP · 7 days ago

      “How are you defining intelligence anyways?”

      It would actually depend on the people saying LLMs are not intelligent.

      I assume they’re referring to intelligence as we see in humans and perhaps animals.

      • Firebat@lemmy.dbzer0.com · 7 days ago

        I’d say that then it probably counts as intelligence, since you can converse with LLMs and have genuinely insightful discussions with them. But I personally just can’t agree that they’re “intelligent,” given that they don’t understand anything they say.

        I’m not sure if you’ve read about the Chinese Room, but Wikipedia has a good article on it:

        https://en.m.wikipedia.org/wiki/Chinese_room
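
        (The gist of the argument: a system can produce fluent-looking replies by pure symbol lookup, with nothing inside that understands anything. A toy sketch in Python, with a rule table invented purely for illustration:)

        ```python
        # Toy "Chinese Room": replies come from rule lookup alone.
        # The rule table is made up for illustration; nothing here
        # understands the symbols it shuffles around.
        RULES = {
            "你好吗?": "我很好，谢谢。",       # "How are you?" -> "Fine, thanks."
            "你叫什么名字?": "我没有名字。",   # "What's your name?" -> "I have no name."
        }

        def room(symbols: str) -> str:
            # The "person in the room" just matches input against the rules.
            return RULES.get(symbols, "请再说一遍。")  # fallback: "Say that again."

        print(room("你好吗?"))  # a fluent-looking reply, zero understanding
        ```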

        • TropicalDingdong@lemmy.world · 7 days ago

          I think what you are describing is “agency” and not necessarily intelligence.

          A goldfish has agency, but no amount of exposure to linear algebra will give it the ability to transpose a matrix.
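
          (Transposing, for the record, just flips a matrix over its diagonal so rows become columns; a minimal Python sketch:)

          ```python
          # Transpose: element (i, j) moves to (j, i).
          m = [[1, 2, 3],
               [4, 5, 6]]
          transposed = [list(row) for row in zip(*m)]
          print(transposed)  # [[1, 4], [2, 5], [3, 6]]
          ```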

          • Firebat@lemmy.dbzer0.com · 6 days ago

            What I tried to say is that if the LLM doesn’t actually understand anything it says, it’s not actually intelligent, is it? Inputs get astonishingly good outputs, but it’s not real AI.

            • TropicalDingdong@lemmy.world · 6 days ago

              “LLM doesn’t actually understand anything it says”

              Do you?

              Do I?

              Where do thoughts come from? Are you the thought or the thing experiencing the thought? Which holds the intelligence?

              I know enough about thought to know that you aren’t planning the words you are about to think next, at least not with any conscious effort. I also know that people tend to not actually know what it is they are trying to say or think until they go through the process; start talking and the words flow.

              Not altogether that different from next-token prediction; maybe just with a network 100x as deep…
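
              (For anyone unfamiliar: next-token prediction is just “score the possible next tokens given everything so far, pick one, append it, repeat.” A minimal sketch, with a toy bigram table standing in for the actual network:)

              ```python
              # Minimal autoregressive loop. The "model" is a toy bigram
              # table invented for illustration, but the loop has the same
              # shape as LLM generation: predict, append, repeat.
              BIGRAMS = {
                  "the": ["words"],
                  "words": ["flow"],
              }

              def generate(token: str, max_len: int = 5) -> list[str]:
                  out = [token]
                  for _ in range(max_len):
                      candidates = BIGRAMS.get(out[-1])
                      if not candidates:
                          break
                      out.append(candidates[0])  # greedy: take the top candidate
                  return out

              print(" ".join(generate("the")))  # -> "the words flow"
              ```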

              • Firebat@lemmy.dbzer0.com · 6 days ago

                This gets really deep into how we’re all made of non-living things and atoms, and yet here we are, and why no other planet we know of has life like ours, etc. Also super philosophical!

                But truly, LLMs don’t understand the things they say, and Apple apparently just put out a paper saying they don’t reason either (if you consider that to be different from understanding). They’re claiming it’s all fancy pattern recognition. (Putting the link below if interested.)

                https://machinelearning.apple.com/research/illusion-of-thinking

                Another difference between a human and an LLM is likely the ability to understand the semantics behind the syntax, rather than just the text alone.

                I feel like there’s more that I want to add but I can’t quite think of how to say it so I’ll stop here.

              • howrar@lemmy.ca · 6 days ago

                An interesting study I recall from my neuroscience classes is that we “decide” on what to do (or in this case, what to say) slightly before we’re aware of the decision, and then our brain comes up with a story about why we made that decision so that it feels like we have agency.

        • propitiouspanda@lemmy.cafeOP · 7 days ago

          Thank you. This is exactly what I was trying to discuss.

          Do you think there’s anything that can change about AI to make it intelligent by your standard?

          • Firebat@lemmy.dbzer0.com · 7 days ago

            I’m of the view that real AI will eventually “emerge” from ongoing efforts to produce it, so asking whether anything can “change” about what’s currently being called AI is somewhat moot.

            I do think LLMs are a dead end on the road to real AI, though. But no one REALLY knows, because it just hasn’t happened yet.

            Not recommending anyone buy crypto, but I’ve been following Qubic and find it really interesting. It’s anyone’s guess which organization will create real AI, though.