

You’re still putting words in my mouth.
I never said they weren't stealing the data.
I didn't comment on that at all, because it's not relevant to the point I was actually making, which is that treating the output of an LLM as if it were derived from any factual source is really problematic, because it isn't.
That’s not obviously the case. I don’t think anyone has a sufficient understanding of general AI, or of consciousness, to say with any confidence what is or is not relevant.
We can agree that LLMs are not going to be turned into general AI, though.