Even if you disable the feature, I have zero trust in OpenAI to respect that decision, given their history of using copyrighted content to enhance their LLMs.

  • gravitas_deficiency@sh.itjust.works · 8 days ago

    This will never ever be used in a surveillance capacity by an administration that’s turning the country into a fascist hyper capitalist oligarchical hellscape. Definitely not. No way. It can’t happen here.

    • PattyMcB@lemmy.world · 8 days ago

      It reminds me of the kids in 1984 who turn their father in for being an enemy of the state.

  • huppakee@lemm.ee · 8 days ago

    The headline: ChatGPT Will Soon Remember Everything You’ve Ever Told It

    • Australis13@fedia.io · 8 days ago

      The irony is that, according to the article, it already does. What is changing is that the LLM will be able to use more of that data:

      OpenAI is rolling out a new update to ChatGPT’s memory that allows the bot to access the contents of all of your previous chats. The idea is that by pulling from your past conversations, ChatGPT will be able to offer more relevant results to your questions, queries, and overall discussions.

      ChatGPT’s memory feature is a little over a year old at this point, but its function has been much more limited than the update OpenAI is rolling out today… Previously, the bot stored those data points in a bank of “saved memories.” You could access this memory bank at any time and see what the bot had stored based on your conversations… However, it wasn’t perfect, and couldn’t naturally pull from past conversations, as a feature like “memory” might imply.

  • zenpocalypse@lemm.ee · 8 days ago

    I’m not going to defend OpenAI in general, but that difference is meaningless outside of how the LLM interacts with you.

    If data privacy is your focus, it doesn’t matter whether the LLM can access your history during a session to shape how it responds to you. They don’t need the LLM at all to make use of that history.

    This isn’t an “I’m out” type of change for privacy. If it is, you missed your stop when they started keeping a history.

  • mattc@lemmy.world · 8 days ago

    What worries me is all the info from those conversations actually becoming public. I haven’t fed it personal info, but I bet a lot of people do. Not only stuff you might tell it yourself, but information fed in by people you know. Friends, family, acquaintances, even enemies could say some really personal or downright false things about you to it, and that could one day surface in public ChatGPT output. Sounds like some sort of Black Mirror episode, but I think it could happen. Wouldn’t be surprised if intelligence agencies already have access to this data. Maybe one day cyber criminals or even potential employers will have all this data too.