Even if you disable the feature, I have zero trust in OpenAI to respect that decision, given their history of using copyrighted content to enhance their LLMs.
This will never ever be used in a surveillance capacity by an administration that’s turning the country into a fascist hyper capitalist oligarchical hellscape. Definitely not. No way. It can’t happen here.
It reminds me of the kids in 1984 who turn their father in for being an enemy of the state.
The headline: ChatGPT Will Soon Remember Everything You’ve Ever Told It
The irony is that, according to the article, it already does. What is changing is that the LLM will be able to use more of that data:
OpenAI is rolling out a new update to ChatGPT’s memory that allows the bot to access the contents of all of your previous chats. The idea is that by pulling from your past conversations, ChatGPT will be able to offer more relevant results to your questions, queries, and overall discussions.
ChatGPT’s memory feature is a little over a year old at this point, but its function has been much more limited than the update OpenAI is rolling out today… Previously, the bot stored those data points in a bank of “saved memories.” You could access this memory bank at any time and see what the bot had stored based on your conversations… However, it wasn’t perfect, and couldn’t naturally pull from past conversations, as a feature like “memory” might imply.
I’m not going to defend OpenAI in general, but that difference is meaningless outside of how the LLM interacts with you.
If data privacy is your focus, it doesn’t matter whether the LLM can access that history during your session to change how it responds to you. They don’t need the LLM at all to use that history.
This isn’t an “I’m out” type of change for privacy. If it is, you missed your stop when they started keeping a history.
Yeah, they have the history already…
ai systems that get to know you over your life
That’s not as attractive as Sam Altman thinks it is.
What worries me is all the info from those conversations actually becoming public. I haven’t fed it personal info, but I bet a lot of people do. And it’s not only what you might tell it, but information fed from people you know. Friends, family, acquaintances, even enemies could say some really personal or downright false things about you, and it could one day surface in public ChatGPT output. It sounds like some sort of Black Mirror episode, but I think it could happen. I wouldn’t be surprised if intelligence agencies already have access to this data. Maybe one day cyber criminals or even potential employers will have it too.
In related news:
Blocking outputs isn’t enough; dad wants OpenAI to delete the false information.