Everyone seems to be complaining about LLMs not actually being intelligent. Does anyone know if there are alternatives out there, either in theory or already made, that you would consider to be ‘more intelligent’ or have the potential to be more intelligent than LLMs?
Do you?
Do I?
Where do thoughts come from? Are you the thought or the thing experiencing the thought? Which holds the intelligence?
I know enough about thought to know that you aren’t planning the words you are about to think next, at least not with any conscious effort. I also know that people often don’t actually know what they are trying to say or think until they go through the process: start talking and the words flow.
Not altogether that different from next token prediction; maybe just with a network 100x as deep…
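To make the comparison concrete, here’s a minimal sketch of what that generation loop looks like at inference time. The `toy_logits` function is an invented stand-in for a trained network; a real LLM swaps in a deep transformer, but the sampling loop around it is essentially this:

```python
import numpy as np

VOCAB = ["the", "words", "just", "flow", "out", "."]

def toy_logits(context: list[str]) -> np.ndarray:
    # Stand-in for a trained network (invented for illustration):
    # fixed preference scores, plus a penalty for repeating the last token.
    scores = np.arange(len(VOCAB), 0, -1, dtype=float)
    if context and context[-1] in VOCAB:
        scores[VOCAB.index(context[-1])] -= 5.0
    return scores

def sample_next(context: list[str], temperature: float = 1.0) -> str:
    # Softmax over the logits, then sample one token -- no plan,
    # no lookahead, just "what word comes next?"
    logits = toy_logits(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return str(np.random.choice(VOCAB, p=probs))

context = ["the"]
for _ in range(8):
    context.append(sample_next(context))
print(" ".join(context))
```

Each token is chosen one at a time, conditioned only on what has already been said, which is the parallel being drawn to “start talking and the words flow.”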
This gets really deep into how we’re all made of non-living atoms and yet here we are, and why no other planet we know of has life like ours, etc. Also super philosophical!
But truly, LLMs don’t understand the things they say, and Apple apparently just put out a paper arguing that they don’t reason either (if you consider that to be different from understanding). They’re claiming it’s all fancy pattern recognition. (Putting a link below if interested.)
https://machinelearning.apple.com/research/illusion-of-thinking
Another likely difference between a human and an LLM is the ability to grasp the semantics behind the syntax, rather than working from the text alone.
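One concrete way to see the “text alone” part: the model’s actual input isn’t words at all, just arbitrary integer IDs from a tokenizer. A toy sketch (the vocabulary and ID assignments here are invented for illustration):

```python
# The model never sees words, only arbitrary integer IDs.
# Shuffle the ID assignments and the training math is unchanged;
# any "meaning" has to be reconstructed from co-occurrence statistics.
vocab = {"cat": 0, "sat": 1, "mat": 2, "on": 3, "the": 4}
sentence = "the cat sat on the mat"
ids = [vocab[w] for w in sentence.split()]
print(ids)  # [4, 0, 1, 3, 4, 2]
```

Whatever semantics the model ends up with has to be inferred from patterns over those IDs, whereas a human hears “cat” and connects it to an actual cat.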
I feel like there’s more that I want to add but I can’t quite think of how to say it so I’ll stop here.
An interesting finding I recall from my neuroscience classes is that we “decide” what to do (or in this case, what to say) slightly before we’re aware of the decision, and then the brain comes up with a story about why we made that decision, so it feels like we have agency.