• Rhaedas@fedia.io

      While I do think it’s simply bad at generating answers because that’s all that’s going on: generating the most likely next word, which works a lot of the time but can then fail spectacularly…
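
      A rough sketch of that “most likely next word” loop, purely illustrative (a toy bigram table over made-up text, nothing like a real model):

      from collections import Counter

      # Toy sketch of greedy next-word prediction (made-up data; real models use
      # neural networks trained on huge corpora, but the failure mode is similar).
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count which word follows which one.
      following = {}
      for prev, nxt in zip(corpus, corpus[1:]):
          following.setdefault(prev, Counter())[nxt] += 1

      def generate(start, steps=5):
          words = [start]
          for _ in range(steps):
              options = following.get(words[-1])
              if not options:
                  break
              # Always commit to the single most likely next word.
              words.append(options.most_common(1)[0][0])
          return " ".join(words)

      # The output reads fluently, but nothing here ever checks it against reality.
      print(generate("the"))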

      What if we’ve created AI, but by training it on internet content we’re simply being trolled by the ultimate troll combination ever?

      • seaQueue@lemmy.world

        This is what happens when you train your magical AI on a decade+ of internet shitposting

  • TheGoldenGod@lemmy.world

    Training AI with internet content was always going to fail, as at least 60% of users online are trolls. It’s even dumber than expecting you can have a child from anal sex.

  • brsrklf@jlai.lu

    Only yesterday, I searched for a very simple figure: the number of public service agents in a specific administrative region. This is, obviously, public information, and there is a government site where you can get it. However, I didn’t know the exact site, so I searched for it on Google.

    Of course, the AI summary shows up first and gives me a confident answer, accurately mirroring my exact request. However, the number seems way too low, so I go check the first actual search result, the aforementioned official site. Google’s shitty assistant took a sentence about a subgroup of agents and presented it as the total. The real number was clearly given earlier on the page, and it was about 4 times that.

    This is just a tidbit of information any human with the source would have identified in a second. How the hell are we supposed to trust AI for complex stuff after that?

  • IninewCrow@lemmy.ca

    In the late ’90s and early 2000s, internet search engines were designed to actually find relevant things … it’s what made Google famous.

    Since the 2010s, internet search engines have all been about monetizing, optimizing, directing, misdirecting, and manipulating searches in order to drive users to the highest-paying companies, businesses, groups, or individuals that best knew how to use Search Engine Optimization. For the past 20 years, we’ve created an internet based on how we can manipulate everyone and everything in order to make someone money. The internet is no longer designed to freely and openly share information … it’s now just a wasteland of misinformation, disinformation, nonsense, and manipulation, because we are all trying to make money off one another in some way.

    AI is just making all those problems bigger, faster, and more chaotic. It’s trying to make money for someone, but it doesn’t know how yet … they sure are working on figuring it out.

  • JustEnoughDucks@feddit.nl

    And then I get downvoted for laughing when people say that they use AI for “general research” 🙄🙄🙄

    • Mike_The_TV@lemmy.world

      I’ve had people legitimately post the answer they got from ChatGPT to answer someone’s question, and then get annoyed when people tell them it’s wrong.