cross-posted from: https://infosec.pub/post/24994013
CJR study shows AI search services misinform users and ignore publisher exclusion requests.
While I do think it's simply bad at generating answers, because that is all that's going on: generating the most likely next word, which works a lot of the time but can then fail spectacularly…
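To make the "most likely next word" point concrete, here's a toy sketch. This is nothing like a real transformer LLM, just a tiny bigram model on a made-up corpus, but it shows the failure mode: the model always emits the statistically most frequent follower, confidently, with no notion of whether that answer is actually right.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; any text would do.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Always pick the single most frequent follower. Plausible most
    # of the time, but the model can never say "I don't know".
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" ("the cat" appears twice)
```

Scale that idea up by a few billion parameters and you get fluent, confident output that is still, at bottom, frequency-driven continuation rather than verified fact.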
What if, by training AI on internet content, we've simply created the ultimate troll and are now being trolled by it?
This is what happens when you train your magical AI on a decade+ of internet shitposting
Training AI with internet content was always going to fail, as at least 60% of users online are trolls. It’s even dumber than expecting you can have a child from anal sex.
I’m shocked!
Shocked I tell you!
Only 60%‽
Blows my mind that it’s so low.
Only yesterday, I searched for a very simple figure: the number of public service agents in a specific administrative region. This is, obviously, public information; there is a government site where you can get it. However, I didn't know the exact site, so I searched for it on Google.
Of course, the AI summary shows up first and gives me a confident answer, accurately mirroring my exact request. However, the number seems way too low to me, so I go check the first actual search result, the aforementioned official site. Google's shitty assistant took a sentence about a subgroup of agents and presented it as the total. The real number was clearly given earlier on the page, and was about four times higher.
This is just a tidbit of information any human with the source would have identified in a second. How the hell are we supposed to trust AI for complex stuff after that?
In the late 90s and early 2000s, internet search engines were designed to actually find relevant things … it’s what made Google famous
Since the 2010s, internet search engines have all been about monetizing, optimizing, directing, misdirecting, and manipulating searches in order to drive users to the highest-paying companies, businesses, groups, or individuals that best knew how to use Search Engine Optimization. For the past 20 years, we've built an internet based on how we can manipulate everyone and everything in order to make someone money. The internet is no longer designed to freely and openly share information … it's now just a wasteland of misinformation, disinformation, nonsense, and manipulation, because we are all trying to make money off one another in some way.
AI is just making all those problems bigger, faster, and more chaotic. It's trying to make money for someone, but it doesn't know how yet … they sure are working on figuring it out, though.
And then I get downvoted for laughing when people say that they use AI for “general research” 🙄🙄🙄
I’ve had people legitimately post the answer they got from ChatGPT to answer someone’s question, and then get annoyed when people tell them it’s wrong.
“I’m not sure, but ChatGPT says…”
No, fuck off, go back to grade school.
Oh man, that’s too good. Thanks for sharing this. Now I kinda want to ask it about blue waffles, but I’m a little scared to.