In their race to push out new, more capable versions, AI companies leave users vulnerable to “LLM grooming” efforts that promote bogus information.

Summary

Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform.

Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation.

“Most chatbots struggle with disinformation,” said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. “They have basic safeguards against harmful content but can’t reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information.”
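
A quick way to see why recency-prioritizing retrieval is easy to game: the toy sketch below blends topical relevance with an exponential freshness boost, and a burst of newly published junk outranks an older, more relevant source. Everything in it is hypothetical (the 50/50 weighting, the 30-day half-life, the document names); it is not any real chatbot's ranking logic.

```python
# Toy sketch: recency-weighted retrieval scoring (hypothetical weights).
# Shows how a burst of fresh, lower-quality documents can outrank an
# older, more relevant source once freshness is blended into the score.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Doc:
    source: str
    relevance: float      # assumed 0..1 topical-match score
    published: datetime

def score(doc: Doc, now: datetime, half_life_days: float = 30.0) -> float:
    # Exponential decay: a document loses half its freshness boost
    # every `half_life_days`. The 50/50 blend is an assumption.
    age_days = (now - doc.published).days
    freshness = 0.5 ** (age_days / half_life_days)
    return 0.5 * doc.relevance + 0.5 * freshness

now = datetime(2025, 4, 1)
corpus = [
    Doc("established encyclopedia", relevance=0.9,
        published=now - timedelta(days=400)),
    # A coordinated network publishes near-duplicate articles days
    # before queries on the topic spike:
    *[Doc(f"planted-outlet-{i}", relevance=0.7,
          published=now - timedelta(days=2)) for i in range(5)],
]

for doc in sorted(corpus, key=lambda d: score(d, now), reverse=True)[:3]:
    print(f"{score(doc, now):.2f}  {doc.source}")
# Every top slot goes to a planted outlet (~0.83 each) while the
# encyclopedia scores ~0.45: the freshness boost swamps relevance.
```

The exact blend does not matter much: any scheme that rewards freshness enough to surface breaking news will also surface whatever was published most recently on a topic, which is exactly what a flood of planted articles exploits.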

Russia and, to a lesser extent, China have been exploiting that weakness by flooding the zone with fables. But anyone could do the same, expending far fewer resources than previous troll farm operations required.

  • Arcane2077@sh.itjust.works · 4 days ago

    It’s always framed as a Russia/China thing, but when you ask ChatGPT about the president, Elon Musk, or Israel, suddenly it has limited things to say, or even occasionally refuses to answer.

    • Admiral Patrick@dubvee.org (OP) · edited · 4 days ago

      Poisoning AI with junk info and seeding it with specific misinformation so that it spreads are different things. This article is describing the latter.