Everyone seems to be complaining about LLMs not actually being intelligent. Does anyone know if there are alternatives out there, either in theory or already built, that you would consider "more intelligent" or that have the potential to be more intelligent than LLMs?
Agreed with the points about defining intelligence, but on a pragmatic note, I'll list some concrete examples of fields in AI that are not LLMs (I'll leave it up to your judgement whether they're "more intelligent" or not):
This list is by no means exhaustive, and there is often overlap between fields as they use each other's solutions to advance their own state of the art, but I hope this helps people who keep hearing that "AI is much more than LLMs" but don't know what else is out there. A common theme is that we use computational methods to answer questions, particularly those we couldn't easily answer ourselves.
To me, what sets AI apart from the rest of computer science is that we don't do "P" problems: if there is a method to compute the solution directly or analytically, I usually wouldn't call it AI. As a basic example, I don't consider analytically computing the coefficients of y = ax + b to be AI, but I do consider approximating a linear model from data using ML to be AI.
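To make that distinction concrete, here's a minimal Python sketch (data and hyperparameters are invented for illustration): the analytic route solves the least-squares problem in one closed-form step, while the "ML" route starts from a guess and iteratively approximates the same coefficients with gradient descent.

```python
import numpy as np

# Toy data: y = 2x + 1 plus noise (true a=2, b=1, values made up).
rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)

# Analytic route: solve the least-squares problem directly -- no "learning".
X = np.column_stack([x, np.ones_like(x)])
a_exact, b_exact = np.linalg.lstsq(X, y, rcond=None)[0]

# Iterative route: approximate the same coefficients by descending the MSE loss.
a, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    pred = a * x + b
    grad_a = 2 * np.mean((pred - y) * x)  # d(MSE)/da
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    a -= lr * grad_a
    b -= lr * grad_b

print(f"analytic:         a={a_exact:.3f}, b={b_exact:.3f}")
print(f"gradient descent: a={a:.3f}, b={b:.3f}")  # converges to the same values
```

Both routes land on essentially the same numbers here; the point is that the first is a direct computation, while the second is the kind of general-purpose approximation that still works when no closed-form solution exists.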