OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it. In many cases, these so-called “hallucinations” can seriously damage a person’s reputation: In the past, ChatGPT falsely accused people of corruption, child abuse – or even murder. The latter was the case with a Norwegian user. When he tried to find out if the chatbot had any information about him, ChatGPT confidently made up a fake story that portrayed him as a convicted murderer. This clearly isn’t an isolated case. noyb has therefore filed its second complaint against OpenAI. By knowingly allowing ChatGPT to produce defamatory results, the company clearly violates the GDPR’s principle of data accuracy.

  • Eheran@lemmy.world · 1 month ago
    1. What does this have to do with privacy?
    2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
    • boonhet@lemm.ee · 1 month ago
      1. You can ask Google to take down malicious requests for your name. With ChatGPT it’s never guaranteed.
      2. ChatGPT is often used as a search engine so anything wrong it says IS spreading bullshit online.
    • Uranium 🟩@sh.itjust.works · 1 month ago
      1. Is this a quirk of the fediverse?

      The community this has been posted in for me is Technology, not Privacy

      2. And those people should also face scrutiny if they are making up potentially life-ruining stuff, such as accusing someone of being a child murderer. The bit I’d want some context for is whether this is a one-off hallucination, or a consistent one that multiple separate users would see if they asked about this person.

      If it’s a one-off hallucination, it’s not good, but nowhere near as bad as a consistent, ‘hard-baked’ hallucination.

      • .Donuts@lemmy.world · 1 month ago

        OpenAI was hit with a privacy complaint; I don’t think the comment was about which community this was posted in.

      • Eheran@lemmy.world · 1 month ago

        It literally is not. He chatted with it, and it always gives some answer. This is not privacy-related; it was made up in a private chat.

    • Telorand@reddthat.com · 1 month ago
      1. It doesn’t. I’m with you there.
      2. Many European countries have very strong anti-defamation laws, unlike the US. What you are allowed to say about people is very different from what you are allowed to say about practically anything else. Since OpenAI is in control of the model, it is their responsibility to ensure it doesn’t produce results like these.