• luciole (he/him)@beehaw.org · 7 days ago

    I’m making a generous assumption by suggesting that “ready” is even possible

    To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Under the hood those are LLMs, built by training neural networks, and for the pudding to set there absolutely needs to be this emergent magic going on where sense spontaneously generates. Because any entity lining up words into sentences charms unsuspecting folks horribly efficiently, it’s easy to be fooled into believing it has happened. But whenever, in a moment of despair, I try to get Copilot to do any sort of task, it becomes abundantly clear it’s unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.

    • Soyweiser@awful.systems · 7 days ago

      There was research showing that every linear jump in capabilities required exponentially more data fed into the models, so it seems likely they aren’t going to be able to get where they want to go.
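
      To put a toy number on that claim: if capability scales roughly with the logarithm of training data, then each equal "linear" capability jump costs a constant *multiplicative* factor more data. A minimal sketch, where the log base and scaling constant are illustrative assumptions rather than measured values:

```python
import math

def data_needed(capability, base=10, k=1.0):
    """Training data needed to reach a given capability score,
    assuming capability = k * log_base(data).
    `base` and `k` are made-up illustrative constants."""
    return base ** (capability / k)

# Each +1 step in capability multiplies the data requirement by `base`.
for c in range(1, 5):
    print(c, f"{data_needed(c):.0e}")
```

      Under this toy model, going from capability 3 to 4 costs as much additional data as everything spent getting to 3 combined, times nine, which is why "just scale it up" eventually stops being an option.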