• henfredemars@infosec.pub · edit-2 · 10 days ago

    Very cool work! I read the abstract of the paper. I don’t think it needs the “AI” buzzword, though; the work is already impressive and stands on its own, and it has nothing to do with LLMs.

    • Deebster@programming.dev · 10 days ago

      It uses a neural net that he designed and trained, so it is AI. The public’s view of “AI” seems to be mostly the generative stuff like chatbots and image gen, but deep learning is perfect for the scientific and medical fields.

      • SpaceNoodle@lemmy.world · 10 days ago

        Because that’s what the buzzword has come to mean. It’s not Lemmings’ fault; it’s the shitty capitalists pushing this slop.

      • hihi24522@lemm.ee · 10 days ago

        The term “artificial intelligence” is supposed to refer to a computer simulating the actions/behavior of a human.

        LLMs can mimic human communication and therefore fit the AI definition.

        Generative AI for images is a much looser fit, but it still fulfills a purpose that until recently most of us thought only humans could do, so some people think it counts as AI.

        However, some of the earliest AIs in computer programs were just NPCs in video games, looong before deep learning became a widespread thing.

        Enemies in video games (typically referring to the algorithms used for their pathfinding) are AI whether they use neural networks or not.
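
A classic instance of that kind of non-neural game AI is grid pathfinding. Here's a minimal sketch using breadth-first search; the grid encoding and function name are illustrative, not from any particular game:

```python
# Hypothetical sketch of classic non-neural game AI:
# breadth-first-search pathfinding on a tile grid.
from collections import deque

def find_path(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (open) / 1 (wall)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links backwards to rebuild the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists

# A 3x3 map with a wall forcing a detour around the right side
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = find_path(grid, (0, 0), (2, 0))
```

No learning anywhere in sight, yet an enemy driven by this is unambiguously "game AI."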

        Deep learning neural networks are predictive mathematical models that can be tuned from data, as in linear regression. This, in itself, is not AI.
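
The "tuned from data like linear regression" point can be made concrete with a toy fit by gradient descent; the same loss-and-gradient loop, scaled up to millions of weights, is what trains deep networks. All names here are illustrative:

```python
# Toy sketch: a two-parameter model tuned from data by gradient
# descent on mean squared error -- the same fitting idea that,
# scaled up, trains deep neural networks.

def fit_line(xs, ys, lr=0.01, steps=1000):
    """Fit y = w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by y = 2x + 1; the fit should land close to that.
w, b = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

Nothing here "thinks"; it's curve fitting, which is exactly the comment's point.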

        Transformers are a special structure that can be implemented in a neural network to amplify or attenuate certain inputs. (This is how ChatGPT can act like it has object permanence or any sort of memory when it doesn’t.) Again, this kind of predictive model is not AI, any more than using Simpson’s Rule to estimate a missing coordinate in a dataset would be AI.
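
For the curious, the input-weighting mechanism transformers use (scaled dot-product attention) can be sketched in a few lines of plain Python. This is a toy illustration of the math, not ChatGPT's implementation:

```python
# Toy sketch of scaled dot-product attention, the core transformer
# operation: each query's output is a weighted mix of the value
# vectors, with weights that amplify or attenuate each position.
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """queries/keys/values: lists of equal-length float vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that closely matches one key pulls its output almost entirely from that key's value vector; mismatched positions are attenuated toward zero weight.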

        Neural networks can be used to mimic human actions, and when they do, that fits the definition. But the techniques and math behind the models are not AI.

        The only people who refer to non-AI things as AI are people who don’t know what they’re talking about, or people who are using it as a buzzword for financial gain (in the case of most corporate executives and tech-bros, it’s both).

        • Em Adespoton@lemmy.ca · 9 days ago

          The term “Artificial Intelligence” has been bandied around for over 50 years to mean all sorts of things.

          These days, all sorts of machine learning are generally classified as AI.

          But I used to work with Cyc and expert systems back in the 90s, and those were considered AI back then, even though they often weren’t trying to mimic human thought.

          For that matter, the use of Lisp in the 1970s to perform recursive logic was considered AI all by itself.

          So while you may personally prefer a more restrictive definition (just as many were up in arms over “hacker” being co-opted to refer to people doing digital burglary), “AI” as the English-speaking world uses the term encompasses generative and diffusion-based creation models, as well as other, less human-centric computing models that rely on machine-learning principles.

          • barsoap@lemm.ee · 9 days ago

            According to gamedevs, 1-player pong (that is, vs. the computer) involves AI. It’s a description of a role within the game world, not of implementation, or indeed of degree of intelligence or amount of power. It could be a rabbit doing little more than running away scared, a general strategising, or an outright god toying with the world, a story-telling AI.

            The key aspect, though, is reacting to and influencing the game itself, or at least some sense of internal goals and agency that sets it apart from mere physics; it can’t just follow a blind script. The computer paddle in pong fits the bill: it reacts dynamically to the ball position, and it wants to score points against the player. Thus, AI. The ball is also simulated, possibly even using more complex maths than the paddle, but it doesn’t have that role of independent agent.
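
The point is easy to see in code: the pong paddle's entire "intelligence" can be a few lines of ball-tracking, and it still plays the role of an opponent. A hypothetical sketch:

```python
# Hypothetical pong "AI": the computer paddle reacts to the game
# state (the ball's position) in pursuit of a goal (intercepting it).
# No learning, no neural network -- just role-as-agent.

def paddle_ai(paddle_y, ball_y, speed=4):
    """Move the computer paddle's y-position toward the ball."""
    if ball_y > paddle_y + speed:
        return paddle_y + speed   # ball is below: step down
    if ball_y < paddle_y - speed:
        return paddle_y - speed   # ball is above: step up
    return ball_y                 # close enough: lock on
```

The ball's physics might involve fancier math, but the ball has no goal, so nobody calls it AI.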

  • SoftestSapphic@lemmy.world · 10 days ago

    Every day a new Einstein is born, and their life and choices are dictated by the level of wealth and opportunity they are born into.

    We would see stories like this every week if wealth and opportunities were equally distributed.

  • brucethemoose@lemmy.world · edit-2 · 9 days ago

    The model was run (and I think trained?) on very modest hardware:

    The computer used for this paper contains an NVIDIA Quadro RTX 6000 with 22 GB of VRAM, 200 GB of RAM, and a 32-core Xeon CPU, courtesy of Caltech.

    That’s a double-VRAM Nvidia RTX 2080 Ti plus a Skylake-era Intel Xeon, an aging circa-2018 setup. With room for a batch size of 4096, no less! Though they did run into a preprocessing bottleneck in CPU/RAM.

    The primary concern is the clustering step. Given the sheer magnitude of data present in the catalog, without question the task will need to be spatially divided in some way, and parallelized over potentially several machines
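
One hypothetical way to do the spatial division the authors allude to is coarse grid binning of sky coordinates, so each cell can be clustered independently on a separate machine. The function and cell scheme here are assumptions for illustration, not from the paper:

```python
# Hypothetical sketch of spatially dividing a large catalog for
# parallel clustering: bin 2-D coordinates into a coarse grid so
# each cell can be processed on its own worker.
from collections import defaultdict

def partition(points, cell_size):
    """Group (x, y) points into grid cells keyed by integer cell index."""
    cells = defaultdict(list)
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append((x, y))
    return cells
```

A real pipeline would also need overlap (halo) regions around each cell so clusters straddling a boundary aren't split; this sketch omits that.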