Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

  • venusaur@lemmy.world

    That’s not how it works, though. It would be great if these AI models were deterministic, but you can get different answers to the same question at any given time. With different inputs and different goals, properly instructed agents are unlikely to all fail on the same task in the same way.

    The main point is that it’s not going to be correct all the time. And neither is a human.

    The regulation comes in when you’re dealing with sensitive information, like health diagnoses. There needs to be some logic in place to stop the models from confidently giving wrong answers that could hurt people.

    Realistically, neither of us knows what’s going to work until we try it. In theory, verification agents would work.
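
    As a rough sketch of what a verification agent could look like, assuming a generic ask_model callable standing in for whatever chat API is actually used (the prompts and the pass/fail check are illustrative, not a tested design):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class VerifiedAnswer:
        text: str
        approved: bool
        notes: str

    def verify_health_answer(question: str, ask_model: Callable[[str], str]) -> VerifiedAnswer:
        # First pass: draft an answer as usual.
        draft = ask_model("Answer this health question:\n" + question)

        # Second pass: a separate verifier prompt that only reviews the draft,
        # flagging claims or citations it cannot support instead of adding content.
        review = ask_model(
            "You are a fact-checking reviewer. List any claims or citations in the "
            "following answer that cannot be verified. Reply 'OK' if there are none.\n\n"
            + draft
        )

        if review.strip().upper().startswith("OK"):
            return VerifiedAnswer(text=draft, approved=True, notes="")

        # If the verifier objects, don't present the draft as authoritative.
        fallback = ("I couldn't verify parts of this answer. Please check with a "
                    "qualified clinician or a trusted medical source.")
        return VerifiedAnswer(text=fallback, approved=False, notes=review)

    Whether a second model actually catches the first one’s fabricated citations is exactly the open question, so a setup like this only helps if the verifier is harder to fool than the generator.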