Everyone seems to be complaining about LLMs not actually being intelligent. Does anyone know if there are alternatives out there, either in theory or already made, that you would consider to be ‘more intelligent’ or have the potential to be more intelligent than LLMs?

  • FortyTwo@lemmy.world · 6 days ago

    Agreed with the points about defining intelligence, but on a pragmatic note, I’ll list some concrete examples of AI fields that are not LLMs (I’ll leave it to your judgement whether they’re “more intelligent” or not):

    • Machine learning. Most of the concrete examples other people gave here were deep learning models; they’re used a lot, but certainly don’t represent all of AI. ML is essentially fitting a function to data by tuning the function’s parameters. It has many sub-fields, like uncertainty quantification, time-series forecasting, meta-learning, representation learning, surrogate modelling and emulation, etc.
    • Optimisation, covering both gradient-based and black-box methods. These methods find parameter values that maximise or minimise a function. Training a machine learning model is itself an optimisation problem, usually solved with gradient-based methods.
    • Reinforcement learning, which often uses a deep neural network to estimate state values, but is itself a framework for assigning values to states and learning the policy that maximises reward. When you hear about agents, they are often trained with RL.
    • Formal methods for solving NP-hard problems; popular examples include TSP and SAT. The goal is to solve these problems efficiently and with theoretical guarantees of correctness. All of the hardware you use will have had its validity checked with this kind of method at some point.
    • Causal inference and discovery: identifying causal relationships from observational data when randomised controlled trials are not feasible, using theoretical proofs to establish when we can and cannot interpret a statistical association as causal.
    • Bayesian inference and learning theory, not quite ML but highly related. These use Bayesian statistical methods, often MCMC, to infer the posterior when the marginal likelihood is normally intractable. It’s mostly statistics, with AI helping out to make the computations actually feasible.
    • Robotics, not a field I know much about, but it’s about physical agents interacting with the real world, which comes with many additional challenges.
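    To make the ML/optimisation points concrete, here’s a toy sketch (all names and numbers are my own, purely illustrative) of “fitting a function by tuning its parameters”: gradient descent on the mean squared error of a line y = a·x + b:

    ```python
    # Minimal sketch of "ML as optimisation": fit y = a*x + b by gradient
    # descent on the mean squared error. Everything here is illustrative.

    def fit_line(xs, ys, lr=0.01, steps=5000):
        a, b = 0.0, 0.0
        n = len(xs)
        for _ in range(steps):
            # Gradients of MSE = (1/n) * sum((a*x + b - y)^2)
            grad_a = (2 / n) * sum((a * x + b - y) * x for x, y in zip(xs, ys))
            grad_b = (2 / n) * sum((a * x + b - y) for x, y in zip(xs, ys))
            a -= lr * grad_a
            b -= lr * grad_b
        return a, b

    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1
    a, b = fit_line(xs, ys)
    print(a, b)                  # converges towards a = 2, b = 1
    ```

    Deep learning is this same loop, just with millions of parameters and automatic differentiation instead of hand-written gradients.
    
    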
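    For the RL bullet, a minimal tabular Q-learning sketch (a toy environment I made up: walk right along a 4-state line to reach a reward) shows the framework without any neural network at all:

    ```python
    import random

    random.seed(0)

    # States 0..3 on a line; actions: 0 = left, 1 = right.
    # Reaching state 3 gives reward 1 and ends the episode.
    N_STATES, GOAL = 4, 3

    def step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if nxt == GOAL else 0.0
        return nxt, reward, nxt == GOAL

    Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
    alpha, gamma, eps = 0.1, 0.9, 0.2

    for _ in range(2000):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: explore sometimes, else act greedily on Q.
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2

    policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
    print(policy)   # greedy action per non-goal state
    ```

    Deep RL swaps the Q table for a neural network; the value-assignment framework is the same.
    
    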
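    For the SAT bullet, here’s what the problem itself looks like. Real solvers use clever search (DPLL/CDCL) rather than this exhaustive check, so treat it as a statement of the problem, not of the method:

    ```python
    from itertools import product

    # A CNF formula as a list of clauses; each clause is a list of ints,
    # where k means variable k is true and -k means it is false
    # (the same convention as the DIMACS format used by real solvers).
    def brute_force_sat(n_vars, clauses):
        for bits in product([False, True], repeat=n_vars):
            assign = {i + 1: bits[i] for i in range(n_vars)}
            if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
                return assign
        return None   # unsatisfiable

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    cnf = [[1, 2], [-1, 3], [-2, -3]]
    model = brute_force_sat(3, cnf)
    print(model)
    ```

    The brute-force loop is exponential in the number of variables; the whole research field is about doing vastly better than this in practice while keeping the guarantee that the answer is correct.
    
    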
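    And for the Bayesian/MCMC bullet, a tiny Metropolis sampler (my own toy example: inferring a coin’s heads-probability) shows the key trick, namely that we only ever evaluate the unnormalised posterior:

    ```python
    import random

    random.seed(1)

    # Posterior of a coin's heads-probability p, with a flat prior and
    # 7 heads out of 10 flips observed. The exact posterior is Beta(8, 4)
    # with mean 8/12, but we pretend we can't normalise it.
    def unnorm_posterior(p, heads=7, flips=10):
        if not 0.0 < p < 1.0:
            return 0.0
        return p ** heads * (1.0 - p) ** (flips - heads)   # likelihood * flat prior

    samples, p = [], 0.5
    for i in range(20000):
        prop = p + random.gauss(0, 0.1)                    # symmetric proposal
        accept = unnorm_posterior(prop) / unnorm_posterior(p)
        if random.random() < accept:                       # Metropolis rule
            p = prop
        if i >= 2000:                                      # discard burn-in
            samples.append(p)

    print(sum(samples) / len(samples))   # close to the exact mean 8/12
    ```

    The normalising constant (the marginal likelihood) cancels in the acceptance ratio, which is exactly why MCMC lets us do inference when that integral is intractable.
    
    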

    This list is by no means exhaustive, and there is often overlap between fields as they use each other’s solutions to advance their own state of the art, but I hope this helps people who always hear that “AI is much more than LLMs” but don’t know what else is out there. A common theme is that we use computational methods to answer questions, particularly those we couldn’t easily answer ourselves.

    To me, what sets AI apart from the rest of computer science is that we don’t do “P” problems: if there is a method to compute the solution directly or analytically, I usually wouldn’t call it AI. As a basic example, I don’t consider computing the coefficients of y = ax + b analytically to be AI, but I do consider approximating a general linear model with ML to be AI.
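    For contrast with the gradient-descent version, here is the “not AI” side of that example: the ordinary least squares coefficients of y = ax + b computed in closed form, with no learning loop involved (the data is made up):

    ```python
    # Analytic ordinary least squares for y = a*x + b:
    # a = cov(x, y) / var(x), b = mean(y) - a * mean(x).
    def ols_line(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx

    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.1, 2.9, 5.2, 6.8]   # roughly y = 2x + 1 with noise
    a, b = ols_line(xs, ys)
    print(a, b)
    ```

    One formula, one exact answer, no iteration or approximation, which is why I’d file it under statistics rather than AI.
    
    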