A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.
SellCell only surveyed users with an AI-enabled phone – that's an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn't give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users were involved.
Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.
From the data listed so far, it seems that people just aren't using AI. In the case of both iPhone and Galaxy users, about two-fifths of those surveyed have tried AI features – 41.6% for iPhone and 46.9% for Galaxy.
So, that’s a majority of users not even bothering with AI in the first place and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.
A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.
It’s like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.
I think it largely depends on what kind of AI we’re talking about. iOS has had models that let you extract subjects from images for a while now, and that’s pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.
As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn’t handle Swedish? I don’t know.
One of the examples I sent to a friend is as follows (originally in Swedish):
Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don’t understand why we pay for this. It’s very disappointing.
And CoPilot was like “yeah, let me fix this for you!”
Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.
Most AIs struggle with languages other than English, unfortunately. I hate how that reinforces the "defaultness" of English.
We’re not talking about an AI running a nuclear reactor; this article is about AI assistants on a personal phone. A 0.001% failure rate for an app on your phone isn’t that insane, and generally the only consequence of a failure would be that you need to try a slightly different query. Tools like Alexa or Siri mishear user commands probably more than 0.001% of the time, and yet those tools have absolutely caught on with a significant number of people.
The issue is that the failure rate of AI is high enough that you have to vet the outputs, which typically requires about as much work as doing whatever you wanted the AI to do yourself. And using AI for creative things like art or videos is a fun novelty, but it isn’t something you do regularly, so your phone pushing apps that you only want to use once in a blue moon is annoying. If AI were actually so useful that you could query it with anything and get back exactly what you wanted 99.999% of the time, it would absolutely catch on.
People love to make these claims.
Nothing is “100% accurate” to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.
So either we acknowledge that everything is already “sewage” and this changes nothing or we acknowledge that people already can find value from searching for answers to questions and they just need to apply critical thought toward whether I_Fucked_your_mom_416 on gamefaqs is a valid source or not.
Which gets to my big issue with most of the “AI Assistant” features. They don’t source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site or use boolean operators when I can instead “ask jeeves” as it were. But I still want the citation of where information was pulled from so I can at least skim it.
99.999% would be fantastic.
90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).
What we have now is like…I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?
I haven’t used Samsung’s stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it’s great.
Ideally, I don’t ever want to hear an AI’s opinion, and I don’t ever want information that’s baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That’s what LLMs are actually good at.
Again: What is the percent “accurate” of an SEO infested blog about why ivermectin will cure all your problems? What is the percent “accurate” of some kid on gamefaqs insisting that you totally can see Lara’s tatas if you do this 90 button command? Or even the people who insist that Jimi was talking about wanting to kiss some dude in Purple Haze.
Everyone is hellbent on insisting that AI hallucinates and… it does. You know who else hallucinates? Dumbfucks. And the internet is chock full of them. And guess what LLMs are training on? It’s the same reason I always laugh when people talk about how AI can’t do feet or hands while ignoring the existence of Rob Liefeld, or WHY so many cartoon characters only have four fingers.
Like I said: I don’t like the AI Assistants that won’t tell me where they got information from, and it is why I pay for Kagi (they are also AI infested, but they put that at higher tiers, so I get a better search experience at the tier I pay for). But I 100% use stuff like chatgpt to sift through the ninety bazillion blogs to find me a snippet of a helm chart that I can then deep-dive into to check whether a given function even exists.
But the reality is that people are still benchmarking LLMs against a reality that has never existed. The question shouldn’t be “we need this to be 100% accurate and never hallucinate” and instead be “What web pages or resources were used to create this answer” and then doing what we should always be doing: Checking the sources to see if they at least seem trustworthy.
For real. If a human performs task X with 80% accuracy, an AI needs to perform the same task with 80.1% accuracy to be a better choice - not 100%. Furthermore, we should consider how much time it would take for a human to perform the task versus an AI. That difference can justify the loss of accuracy. It all depends on the problem you’re trying to solve. With that said, it feels like AI on mobile devices hardly solves any problems.
“useless” is a more positive impression than I have.
A damning result for AI pump and dump scammers.
“Stop trying to make ~~fetch~~ AI happen. It’s not going to happen.”
AI is worse than adding no value; it is an actual detriment.
AI is useless and I block it any way I can.
I hate that I can no longer trust what comes out of my phone camera to be an accurate representation of reality. I turn off all the AI enhancement stuff, but who knows what kind of fuckery is baked into the firmware.
NO, I don’t want fake AI depth of field. NO, I do not want fake AI “makeup” fixing my ugly face. NO, I do not want AI deleting tourists in the background of my picture of the Eiffel Tower.
NO, I do not want AI curating my memories and reality. Sure, my vacation photos have shitty lighting and bad composition. But they are MY photos and MY memories of something I experienced personally. AI should not be “fixing” that for me.
Is there a Black Mirror episode for that? A technology that automatically edits your memories to be inaccurate, but “better”.
classic techbro overhype
Cram a new feature into everything without separating it out or offering a choice to opt out of it.
The first thing I do with a new phone is turn off any kind of assistance.
It is absolutely useless for simple everyday tasks, I find.
Who the fuck needs AI to SUMMARIZE an EMAIL, GOOGLE?
IT’S FIVE LINES
Get out of my face Gemini!
The AI thing I’d really like is an on-device classifier that decides with reasonably high reliability whether I would want my phone to interrupt me with a given notification or not. I already don’t allow useless notifications, but a message from a friend might be a question about something urgent, or a cat picture.
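To make that concrete, here’s a toy sketch in Python with scikit-learn of the kind of thing I mean; the example messages and labels are made up, and a real version would train on-device from your own history of which notifications you actually acted on:

```python
# Toy sketch: classify notification text as "interrupt now" vs "hold for later".
# The training messages and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_texts = [
    "are you free to call? it's urgent",
    "your package was delivered",
    "look at this cat",
    "server is down, need you online",
    "50% off everything this weekend",
    "can you pick me up at 6?",
]
training_labels = [1, 0, 0, 1, 0, 1]  # 1 = interrupt me, 0 = stay silent

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(training_texts, training_labels)

incoming = "hey, quick question, is the meeting tomorrow still on?"
score = clf.predict_proba([incoming])[0][1]  # probability of "interrupt"
print("interrupt" if score > 0.5 else "hold", f"(score {score:.2f})")
```

Obviously a toy, but that’s the point: it’s a small-model problem, not a giant-LLM problem.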
What I don’t want is:
- Ways to make fake photographs
- Summaries of messages I could just skim the old fashioned way
- Easier access to LLM chatbots
It seems like those are the main AI features bundled on phones now, and I have no use for any of them.
That’s useful AI that doesn’t take billions of dollars to train, though. (it’s also a great idea and I’d be down for it)
You mean paying money to people to actually program. In fair exchange for their labor and expertise, instead of stealing it from the internet? What are you, a socialist?
/s
This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.
Imagine if AI actually worked for users:
- Show me all settings to block data sharing and maximize privacy.
- Explain how you optimized my battery last week and how much time it saved.
- Automatically silence spam calls without selling my data to third parties.
- Detect and block apps that secretly drain data or access my microphone.
- Automatically organize my photos by topic without uploading them to the cloud.
- Do everything I could do with Tasker just by saying it in plain words.
How could you ensure AI sorts your pictures privately if the requests to analyze your sensitive imagery have to be made on a server? (One that built its knowledge by disrespecting others’ copyright anyway, lol.)
Why must it connect to a server to do it? Why can’t it work offline? DeepSeek showed us that it is possible. The companies want everyone to think that AI only works online. For example, the AI image enhancements on my mid-range Samsung phone work offline.
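For what it’s worth, sorting photos by topic is exactly the kind of thing that already runs fine offline. Here’s a rough sketch in Python using torchvision’s pretrained MobileNetV3 (just a stand-in I picked; any compact image classifier would do) – after the weights are downloaded once, nothing ever leaves the machine:

```python
# Rough sketch: tag local photos by topic with a small on-device model.
# MobileNetV3 here is only an example of a compact classifier; once the
# weights are cached locally, this runs fully offline.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models import MobileNet_V3_Small_Weights, mobilenet_v3_small

weights = MobileNet_V3_Small_Weights.DEFAULT
model = mobilenet_v3_small(weights=weights).eval()
preprocess = weights.transforms()          # resizing/normalization preset
labels = weights.meta["categories"]        # ImageNet class names

for photo in Path("~/Pictures").expanduser().glob("*.jpg"):
    img = preprocess(Image.open(photo).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        top = model(img).squeeze(0).softmax(0).argmax().item()
    print(photo.name, "->", labels[top])   # e.g. IMG_0412.jpg -> seashore
```

A phone would ship a quantized version of something like this, but the point stands: no server round-trip is required for that kind of feature.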
Oh, my bad, sorry, I’m not well versed.
That’s why I asked :p
A lot of people assume that AI necessarily means a permanent server connection. I don’t mind if it is a bit slower, as long as it is part of my device.
AI was never meant for the average person, but the average person had to be convinced it was, for the sake of funding.