Everyone seems to be complaining about LLMs not actually being intelligent. Does anyone know if there are alternatives out there, either in theory or already made, that you would consider to be ‘more intelligent’ or have the potential to be more intelligent than LLMs?
What would need to change about LLMs to make them closer to ‘true AI’ or even real intelligence? Or do you think there needs to be a different approach altogether?
‘true AI’ and ‘real intelligence’ don’t have any extrinsic meaning.
If you can’t tell it’s not a human, is that good enough? We’d actually have to dumb the current systems down to fool people these days. Sure, you can ask an LLM something off-base and it gets it completely wrong; but ask a person a question and they’ll often get it completely wrong too.
Without a useful and stable definition of intelligence, it’s not really an answerable question.