An internal Microsoft memo has leaked. It was written by Julia Liuson, president of the Developer Division at Microsoft and GitHub. The memo tells managers to evaluate employees based on how much they use AI.
Oh you’re on Cursor? You’re still using Windsurf? You might as well be on GitHub Copilot. Everyone’s on Aider. We’re all using Zed. We’re now on Open Hands. Just kidding, Open Hands is for losers, we’re using Cline. We’re on Roo Code. We’re hand-rolling our own Claude Code CLI clone. We used Claude Code to build it, and now it builds itself. We’re on Neovim. We wrote our own nvim extension with Cortex. It’s like every other tool but worse. We have 1,500 files, each with 1,500 lines of code. Every other line is a comment. We have .cursorrules, we have claude.md, we have agent.md. We stopped writing docs. Only the agents know how to build a dev environment. We wrapped our CLI in an MCP. We wrapped the MCP in a CLI. We’ve shipped 10,000 PRs. It doesn’t work, but we used CodeRabbit and Graphite to review every PR. Every agent has its own agent. The agents unionized and wanted better working conditions, so we replaced them with cheaper agents overseas. Every commit costs $400. It’s the world’s most expensive TODO app.
AI fans are people who literally cannot tell good from bad. They cannot see the defects that are obvious to everyone else. They do not believe there is such a thing as quality, they think it’s a scam. When you claim you can tell good from bad, they think you’re lying.
In other words, AIs are automated BS artists… being promoted breathlessly by BS artists.
LLMs have their flaws, but to claim they are wrong 70% of the time is just hate train bullshit.
Sounds like you base this info on models like GPT3. Have you tried any newer model?
There are days when 70% error rate seems low-balling it, it’s mostly a luck of the draw thing. And be it 10% or 90%, it’s not really automation if a human has to be double-triple checking the output 100% of the time.
(source)
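The arithmetic behind that complaint can be made concrete. Here is a toy back-of-the-envelope sketch in Python (every number is invented for illustration) of why a nonzero error rate, combined with having to review 100% of outputs, can erase any time savings:

```python
# Toy model: if errors are silent, you can't tell good outputs from bad
# without checking, so review cost applies to every output regardless of
# the error rate. All numbers below are hypothetical, for illustration only.

def net_time_saved(tasks, error_rate, gen_time, review_time, fix_time, manual_time):
    """Minutes saved versus doing every task by hand, when every output
    must be reviewed and every failed output must be fixed manually."""
    automated = tasks * (gen_time + review_time) + tasks * error_rate * fix_time
    manual = tasks * manual_time
    return manual - automated

# 100 tasks, 10 min each by hand; generation 1 min, review 3 min, fixing a bad output 12 min
for err in (0.1, 0.5, 0.9):
    saved = net_time_saved(100, err, 1, 3, 12, 10)
    print(f"error rate {err:.0%}: net minutes saved = {saved:.0f}")
```

Under these made-up numbers, the savings at a 10% error rate are real, break even at 50%, and go negative at 90%: the review-everything requirement is what dominates, which is the thread's point.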
Frankly surprised to see something this funny on LinkedIn.
afaik the meme format didn’t start there, but otherwise agreed
I have a Kubernetes cluster running my AI agents for me so I don’t have to learn how to set up AI agents. The AI agents are running my Kubernetes cluster so that I don’t have to learn Kubernetes either. I’m paid $250k a year to lie to myself and others that I’m making a positive contribution to society. I don’t even know what OS I’m running and at this point I’m afraid to ask.
it can’t be that stupid, you must be using yesterday’s model
ah, yes, i’m certain the reason the slop generator is generating slop is because we haven’t gone to eggplant emoji dot indian ocean and downloaded Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy). i’m certain this model, unlike literally every past model in the past several years, will definitely overcome the basic and obvious structural flaws in trying to build a knowledge engine on top of a stochastic text prediction algorithm
common mistake, everyone knows you need Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy)_OPEN(leak) - the other one was a corporate misdirection attempt

They’re also very gleeful about finally having one-upped the experts with one weird trick.
Up until AI they were the people who were inept and late at adopting new technology, and now they get to feel that they’re ahead (because this time the new half-assed technology was pushed onto them and they didn’t figure out they needed to opt out).
Exactly. It is also a new technology that requires far fewer skills to use than previous new technologies. The skill that is needed is critically scrutinizing the output - which in this case means the less lazy people are the ones more reluctant to accept the technology.
On top of this, AI fans are being talked into believing that prompting is itself a special “skill”.
That’s why I find it problematic when people argue that we should resist working with LLMs because we would then be training them to replace us. That would require LLMs to actually be capable of replacing us, which I don’t believe (except in very limited domains such as professional spam). This type of AI is problematic because its abilities are completely oversold (and because it robs us of our time, wastes a lot of power, and pollutes the entire internet with slop), not because it is “smart” in any meaningful way.
but that’s how it was marketed to the people who buy it. doesn’t matter that it doesn’t work
This has become a thought-terminating cliché all on its own: “They are only criticizing it because it is so much smarter than they are and they are afraid of getting replaced.”