“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”
Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.
That is not at all what he said. He said that creating some arbitrary benchmark for the level or quality of the AI (e.g., it’s smarter than a 5th grader, or as intelligent as an adult) is meaningless, and that the real measure is whether value is created and put out into the real world. He also points to world growth going up by 10% as the real benchmark. He doesn’t provide data correlating growth with the use of AI, and I doubt such data exists yet. Let’s not twist what he said into “Microsoft CEO says AI provides no value” when that is not what he said.
AI is the immigrants of the left.
Of course he didn’t say this. The media want you to think he did.
“They’re taking your jobs”
microsoft rn:
✋ AI
👉 quantum
can’t wait to have to explain the difference between asymmetric-key and symmetric-key cryptography to my friends!
Forgive my ignorance. Is there no known quantum-safe symmetric-key encryption algorithm?
I’m not an expert by any means, but from what I understand, most symmetric-key and hashing cryptography will probably be fine, but asymmetric-key cryptography is where the problems will be. Lots of stuff uses asymmetric-key cryptography, like HTTPS for example.
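If it helps to picture the difference, here’s a minimal sketch using Python and the pyca/cryptography package (the key sizes and messages are just illustrative, not anything TLS actually does on the wire):

```python
# Symmetric vs. asymmetric in ~20 lines. A large quantum computer running
# Shor's algorithm would break the RSA half below; Grover's algorithm only
# halves the effective strength of the symmetric half (so AES-256 survives).
from cryptography.fernet import Fernet  # symmetric: AES + HMAC under the hood
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric: one shared secret key both encrypts and decrypts.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"session data")
assert Fernet(key).decrypt(token) == b"session data"

# Asymmetric: a public/private pair, typically used to share that secret key.
# This is the part (RSA, ECC, Diffie-Hellman) that quantum computers threaten.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped = private_key.public_key().encrypt(key, oaep)
assert private_key.decrypt(wrapped, oaep) == key
```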
Oh, that’s not good. I thought TLS was quantum-safe already.
That’s standard for emerging technologies. They tend to be loss leaders for quite a long period in the early years.
It’s really weird that so many people gravitate to anything even remotely critical of AI, regardless of context or even accuracy. I don’t really understand the aggressive need for so many people to see it fail.
I just can’t see AI tools like ChatGPT ever being profitable. It’s a neat little thing that has flaws but generally works well, though I’m just putzing around in the free version. There’s no price I’d be willing to pay for the service it provides, and I think OpenAI has set its sights way too high with talk of $200/month subscriptions for its top-of-the-line product.
This summarizes it well: https://www.wheresyoured.at/wheres-the-money/
I’ve been working on an internal project for my job - a quarterly report on the most bleeding-edge use cases of AI - and the stuff being achieved is genuinely impressive.
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
The answer is the chatbot. If you have the technical nous to program machine-learning tools, it can accomplish truly stunning things at speeds not seen before.
If you don’t know how to do, for example, a Fourier transform (a sketch follows below), you lack the skills to use the tools effectively. That’s no one’s fault; not everyone needs that knowledge. But it does explain the gap between promise and delivery: it can only help you do faster what you already know how to do.
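To make that concrete, here’s a toy numpy sketch (my own invented signal, not anything from the report) of the kind of knowledge in question: knowing why you’d take a Fourier transform of a signal at all.

```python
# Toy example: pull the component frequencies out of a mixed signal with an FFT.
import numpy as np

fs = 1000                                   # sample rate, Hz
t = np.arange(0, 1, 1 / fs)                 # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)              # Fourier transform of a real signal
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The two dominant frequencies fall out immediately: 50 Hz and 120 Hz.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(np.sort(peaks))                       # [ 50. 120.]
```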
Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.
For coding it’s also useful for doing the menial grunt work that’s easy but just takes time.
You’re not going to replace a senior dev with it, of course, but it’s a great tool.
My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.
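For anyone curious what “getting an LLM fine-tuned” looks like in miniature, here’s a heavily simplified sketch with Hugging Face transformers; the model, labels, and documents are all placeholders, not the actual pipeline we used:

```python
# Minimal document-classification fine-tune (illustrative only). Real
# "intelligent document processing" adds OCR, layout features, far more data.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)   # e.g. invoice / contract / memo

texts = ["Invoice #1234, total due: $500.00",
         "This agreement is made between the parties..."]
labels = [0, 1]
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class DocDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        return {**{k: v[i] for k, v in enc.items()},
                "labels": torch.tensor(labels[i])}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=DocDataset(),
)
trainer.train()
```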
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
The mere fact that you call an LLM “AI” shows how unqualified you are to comment on the “successes”.
Not this again… LLMs are a subset of ML, which is a subset of AI.
AI is very very broad and all of ML fits into it.
This is the issue with current public discourse, though: “AI” has become shorthand for the current GenAI hype cycle, meaning that for many people AI has effectively become a subset of ML.
AI is burning a shit ton of energy and researchers’ time though!
That’s not the worst of it. It’s burning billions for these companies, with no sign of them ever coming close to profitability.
You say this like it’s a bad thing?
Indeed, we just have to wait until venture capital realizes no one’s getting their money back from this. Then it’ll all crumble like the world is ending.
R&D is always a money sink
It isn’t R&D anymore if you’re actively marketing it.
Uh… It used to be, and it should be. But the entire industry has embraced treating production as test now. We sell alpha-release games as mainstream releases. Microsoft fired QC long ago. They push out world-breaking updates every other month.
And people have forked over their money with smiles.
That’s because they want to use AI in a server scenario where clients log in. Translated into American English and spoken with honesty, that means they are spying on you. Anything you do on your computer is subject to automatic spying. You could be totally under the radar, but as soon as you say the magic words together, bam! “I’d love a sling thong for my wife”… bam! Here are 20 ads, just click to purchase, since they already stole your wife’s boob size, body measurements, and preferred lingerie styles. And if you’re on McMaster… “Hmm, I need a 1/2 pipe and a cap… better get two caps in case you cross-thread one”… ding dong! FBI! We know you’re in there! Come out with your hands up!
The only thing stopping me from switching to Linux is some college software (Won’t need it when I’m done) and 1 game (which no longer gets updates and thus is on the path to a slow sad demise)
So I’m on the verge of going Penguin.
Yeah, run Windows in a VM and your game will probably just work anyway. I was surprised that all the games I have on Steam now just work on Linux.
Years ago, when I switched from OS X to Linux, I just stopped gaming because of that. But I started testing my old games, and suddenly there were no problems with them anymore.
Just run Windows in a VM on Linux. You can use VirtualBox.
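If you’d rather script it than click through the GUI, something like this works. A rough sketch: the VM name, memory, and disk size are placeholders, and you still need a Windows installer ISO.

```python
# Create and start a basic VirtualBox VM by shelling out to VBoxManage.
import subprocess

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", "win11", "--ostype", "Windows11_64", "--register")
vbox("modifyvm", "win11", "--memory", "8192", "--cpus", "4")
vbox("createmedium", "disk", "--filename", "win11.vdi", "--size", "65536")  # MB
vbox("storagectl", "win11", "--name", "SATA", "--add", "sata")
vbox("storageattach", "win11", "--storagectl", "SATA", "--port", "0",
     "--device", "0", "--type", "hdd", "--medium", "win11.vdi")
vbox("startvm", "win11")
```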
Very bold move, in a tech climate in which CEOs declare generative AI to be the answer to everything, and in which shareholders expect line to go up faster…
I half expect to next read an article about his ouster.
If it seems odd for him to suddenly say that all this AI stuff is bullshit, that’s because he didn’t. He said it hasn’t boosted the world economy on the order of the Industrial Revolution - yet. There is so much hype around this, and he’s on the line to deliver actual results, so it’s smart for him to take a little air out of the hype balloon. But the article headline is a total misrepresentation of what he said. He said we are still waiting for the hype to become reality, in the form of something obvious and impossible to miss, like the world economy shooting up 10% across the board. That’s very, very different from “no value.”
He said we are still waiting for the hype to become reality, in the form of something obvious and impossible to miss, like the world economy shooting up 10% across the board.
That’s such an odd turn of phrase. “We’re still waiting for the hype to become a reality…” and “…something obvious and impossible to miss…”
So, like, do I have time to go to the bathroom and get a drink before I sit down and start staring at the empty space, or…?
Don’t get me wrong. I work with this stuff every day at this point. My job is LLMs and model-training pipelines and agentic frameworks. But there is something… off about saying the equivalent of “it’ll happen any day now…”
It may well, but making forward-looking decisions based on something that doesn’t exist and may never come to pass feels like madness.
Again, you’re twisting the words.
He said “we’re still waiting” and you’ve twisted that into “any day now.” As if still waiting for something to materialize is the same thing as being certain it will.
Hype always precedes reality. That’s the nature of hype. If someone says the hype is nice but we’re still waiting to see it become reality, that is not blind faith in the hype. It’s the complete opposite. It’s restraint.
Anyway. People get some kind of emotional catharsis from hating AI and shitting on CEOs so I think a fair reading of this is just going to go out the window at any opportunity. I won’t spend any more energy trying to contain that.
Correction: LLMs being used to automate shit don’t generate any value. The underlying AI technology is generating tons of value.
AlphaFold 2 has advanced biochemistry research in protein folding by multiple decades in just a couple of years, taking us from 150,000 known protein structures to 200 million in a year.
Thanks. So the underlying architecture that powers LLMs has applications in things besides language generation, like protein folding and DNA sequencing.
AlphaFold is not an LLM, so no, not really.
You are correct that AlphaFold is not an LLM, but both are possible because of the same breakthrough in deep learning, the transformer, and so they do share similar architectural components.
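For the curious, the shared building block is scaled dot-product self-attention. Here’s a bare-bones numpy sketch; the shapes and weights are made up, and real models add multiple heads, learned projections, layer norms, and much more:

```python
# Self-attention: every position looks at every other position and mixes in
# information, whether the positions are words (LLMs) or residues/pair
# features (AlphaFold-style models).
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model) array of per-position features."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                              # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                         # 5 positions, 8 features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)          # (5, 8)
```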