But every techbro on the planet told me it’s exactly what LLMs are good at. What the hell!? /s
Not only techbros, though. Most of my friends aren't into computers, but they all think AI is magical and will change the whole world for the better. I always ask, "How can a black box that throws up random crap and runs on the computers of big companies in another country change anything?" They don't know what to say, but they still believe something will happen and that a program can magically become sentient. Sometimes they can be fucking dumb, but I still love them.
The more you know what you're doing, the less impressed you are by AI. Calling people who trust AI idiots is not a good start to a conversation, though.
It's not like they're flat earthers; they're not conspiracy theorists. They've been told by the media, businesses, and every goddamn YouTuber that AI is the future.
I don't think they're idiots; I just think they're being lied to and are a bit gullible. But it's not worth having the argument with them. AI is going to fail on its own; it doesn't matter what they think.
As always, never rely on LLMs for anything factual. They're only good for things with a massive tolerance for error, such as entertainment (e.g. RPGs).
The issue for RPGs is that they have such “small” context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later
Although, similar to how DeepSeek uses two stages ("how would you solve this problem?", then "solve this problem following this train of thought"), you could feed the model the recent conversation plus a private, unseen "notebook" which is modified or appended to based on recent events. That would need a whole new model to be done properly, which likely wouldn't be profitable short-term. Still, I imagine the same infrastructure could be used for any LLM task where fine details over a long period matter more than specific wording, including factual things.
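The two-stage "notebook" idea above can be sketched in a few lines. Everything here is hypothetical: `extract_fact` stands in for the first-stage distillation pass (which would really be a model call), and `build_prompt` shows how the second stage would see the distilled notebook plus only a short window of recent turns.

```python
from collections import deque

class NotebookSession:
    """Sketch of an RPG session with a long-lived 'GM notebook'."""

    def __init__(self, window=4):
        self.recent = deque(maxlen=window)  # only the last few turns are sent verbatim
        self.notebook = []                  # long-lived facts, never truncated

    def record_turn(self, turn, extract_fact=None):
        self.recent.append(turn)
        # Stage 1: a separate pass distills durable facts out of the turn.
        # `extract_fact` is a placeholder for that model call.
        if extract_fact:
            fact = extract_fact(turn)
            if fact:
                self.notebook.append(fact)

    def build_prompt(self, user_input):
        # Stage 2: the main model sees the notebook plus the recent window,
        # instead of the entire (unboundedly long) session history.
        return "\n".join(
            ["[Notebook]"] + self.notebook
            + ["[Recent turns]"] + list(self.recent)
            + ["[Player]", user_input]
        )
```

The point of the sketch: a fact recorded early ("silver key") survives in the notebook even after the raw turn that mentioned it has fallen out of the recent-turns window.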
Nonsense, I use it a ton for science and engineering, it saves me SO much time!
Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.
But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.
ShockedPikachu.svg
Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.
BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline
Some examples of inaccuracies found by the BBC included:
- Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
- ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
- Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" *and described Israel's actions as "aggressive"*
I learned that AI chat bots aren’t necessarily trustworthy in everything. In fact, if you aren’t taking their shit with a grain of salt, you’re doing something very wrong.
This is my personal take. As long as you’re careful and thoughtful whenever using them, they can be extremely useful.
Could you tell me what you use it for because I legitimately don’t understand what I’m supposed to find helpful about the thing.
We all got sent an email at work a couple of weeks back saying they want ideas for a meeting next month about how we can incorporate AI into the business. I head IT, so I'm supposed to be able to come up with some kind of answer, and yet I have nothing. Even putting aside the fact that it probably doesn't work as advertised, I still can't really think of a use for it.
The main problem is it won’t be able to operate our ancient and convoluted ticketing system, so it can’t actually help.
Everyone I’ve ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.
I'm a creative writer (as in, I write stories and stuff), or at least I used to be. Sometimes talking to ChatGPT about ideas for writing can be interesting, but other times it's kind of annoying, since I'm more into fine-tuning a piece than having it inundate me with ideas I don't find particularly interesting.
Great for turning complex into simple.
Bad for turning simple into complex.
I think my largest gripe with it is it can’t actually do anything. It can just tell you about stuff.
I can ask it how to change the desktop background on my computer and it will 100% be able to tell me, but if I then prompt it to change the background itself, it won't be able to. It has zero ability to interact with the computer; this is the case even with AI run locally.
It can't move the mouse around, and it can't send keyboard commands.
You don’t say.
Turns out, spitting out words when you don’t know what anything means or what “means” means is bad, mmmmkay.
It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
Introduced factual errors
Yeah that’s . . . that’s bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be “okay enough” for some tasks some day. That’ll be another 200 Billion please.
Alternatively: 49% had no significant issues and 81% had no factual errors. It's not perfect, but it's cheap, quick, and easy.
It’s easy, it’s quick, and it’s free: pouring river water in your socks.
Fortunately, there are other possible criteria. Flip a coin every time you read an article to decide whether you get quick-and-easy significant issues.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than an “expert” in their field.
But I’d guess the AI is quite a bit better than, say, the average Republican.
I guess you don’t get the issue. You give the AI some text to summarize the key points. The AI gives you wrong info in a percentage of those summaries.
There’s no point in comparing this to a human, since this is usually something done for automation, that is, to work for a lot of people or a large quantity of articles. At best you can compare it to other automated summaries that existed before LLMs, which might not have all the info, but won’t make up random facts that aren’t in the article.
I’m more interested in the technology itself, rather than its current application.
I feel like I am watching a toddler taking her first steps; wondering what she will eventually accomplish in her lifetime. But the loudest voices aren’t cheering her on: they’re sitting in their recliners, smugly claiming she’s useless. She can’t even participate in a marathon, let alone compete with actual athletes!
Basically, the best AIs currently have college-level mastery of language, and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.
Do you dislike ai?
I work in tech and can confirm that the vast majority of engineers "dislike AI" and are disillusioned with AI tools, even ones who work on AI/ML tools. And it's fewer and fewer people the higher up the pay scale you go.
There isn’t a single complex coding problem an AI can solve. If you don’t understand something and it helps you write it I’ll close the MR and delete your code since it’s worthless. You have to understand what you write. I do not care if it works. You have to understand every line.
“But I use it just fine and I’m an…”
Then you're not an engineer and you shouldn't have a job. You lack the intelligence, dedication, and knowledge needed to be one. You are a detriment to your team and company.
"I can calculate powers with decimal values in the exponent, and if you cannot do that on paper but instead use these machines, your calculations are worthless and you are not an engineer."
You seem to fail to see that this new tool has unique strengths. As the other guy said, it is just like people ranting about Wikipedia. Absurd.
You can also just have an application designed to do that do it more accurately.
If you can’t do that you’re not an engineer. If you don’t recommend that you’re not an engineer.
That’s some weird gatekeeping. Why stop there? Whoever is using a linter is obviously too stupid to write clean code right off the bat. Syntax highlighting is for noobs.
I wholeheartedly dislike people who think they need to define arcane rules for how a task is achieved instead of just looking at the output.
Accept that you probably already have merged code that was generated by AI and it’s totally fine as long as tests are passing and it fits the architecture.
That’s why I avoid them like the plague. I’ve even changed almost every platform I’m using to get away from the AI-pocalypse.
I can’t stand the corporate double think.
Despite the mountains of evidence that AI is not capable of something as basic as reading an article and telling you what it's about, it's still apparently going to replace humans. How do they come to that conclusion?
The world won’t be destroyed by AI, It will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.
Cue global economic collapse.
I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.
Do you mean you rigorously went through a hundred articles, asking DeepSeek to summarise them and then got relevant experts in the subject of the articles to rate the quality of answers? Could you tell us what percentage of the summaries that were found to introduce errors then? Literally 0?
Or do you mean that you tried having DeepSeek summarise a couple of articles, didn't see anything obviously problematic, and figured it's doing fine? Replacing rigorous research and journalism by humans with a couple of quick AI prompts is the core of the issue the article is getting at. If that's what you did, please reconsider how you evaluate (or trust others' evaluations of) information tools which might help, or help destroy, democracy.
Yes, I think it would be naive to expect humans to design something capable of what humans are not.
We do that all the time. It’s kind of humanity’s thing. I can’t run 60mph, but my car sure can.
Qualitatively.
That response doesn’t make sense. Please clarify.
A human can move, and a car can move. A human can't move at such speed; a car can. The former is a qualitative difference as I meant it, the latter a quantitative one.
Anyway, that’s how I used those words.
Ooooooh. Ok that makes sense.
With that said, you might look at researchers using AI to come up with useful new ways to fold proteins, and at biology in general. The roadblock, to my understanding (I'm a data science guy, not a biologist), is the time it takes to discover these things / how long it would take evolution to get there. Admittedly that's still somewhat quantitative.
For qualitative examples, we always have hallucinations, and that's a poorly understood mechanism that may well be able to produce actual creativity. But it's the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on. Though that leads to "nothing new under the sun," so I'll stop rambling now.
The roadblock, to my understanding (I'm a data science guy, not a biologist), is the time it takes to discover these things / how long it would take evolution to get there. Admittedly that's still somewhat quantitative.
Yes.
But it’s the nature of AI to remain within (or close to within) the corpus of knowledge they were trained on.
That’s fundamentally solvable.
I'm not against attempts at global artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it's something general, we in fact want something that thinks like a human.
What all these companies like DeepSeek and OpenAI have been doing lately with "chain-of-thought" models is, in my opinion, what they should have been focused on: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms based on those syllogisms? There seems to be a chicken-and-egg problem between logic and algebra: one seems necessary for the other in such a system, yet they depend on each other (for a machine; humans keep a few things constant for most of our existence). And the predictor into which they've invested so much data is a minor part which doesn't have to be so powerful.
I'm not against attempts at global artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it's something general, we in fact want something that thinks like a human.
Agreed. The techbros pretending that the stochastic parrots they’ve created are general AI annoys me to no end.
While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output layer for a greater system, analogous to the Wernicke and Broca areas of the brain. It seems like they're trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the promise that this one technique (more or less; I know it's more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know it's supposed to be a universal function approximator, hypothetically, but I think the gap between hypothesis and practice is very large, and we're dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
Now that I've used a whole lot of cheap metaphor on someone who casually dropped "syllogism" into a conversation, I'm feeling like a freshman in a grad-level class. I'll admit I'm nowhere near up to date on specific models and bleeding-edge techniques.
What temperature and sampling settings? Which models?
I've noticed that the AI giants seem to be encouraging "AI ignorance," as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.
I find my local thinking models (FuseAI, Arcee, or DeepSeek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs offer. Same with "affordable" flagship API models (like base DeepSeek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.
I don’t think giving the temperature knob to end users is the answer.
Turning it down for max correctness and low creativity won't work in an intuitive way.
Sure, turning it up from the balanced middle value will make it more "creative" and unexpected, and this is useful for idea generation, etc. But a knob that goes from "good" to "sort of off the rails, but in a good way" isn't a great user experience for most people.
Most people understand this stuff as intended to be intelligent. Correct. Etc. Or at least they understand that's the goal. Once you give them a knob to adjust the "intelligence level," you'll get more pushback about these things not meeting their goals: "I clearly had it in factual/correct/intelligent mode, not creativity mode. I don't understand why it left out these facts and invented a backstory for this small thing mentioned…"
Not everyone is an engineer. Temp is an obtuse thing.
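For reference, the knob under discussion is just a scale factor applied to the model's logits before the softmax: temperatures below 1 sharpen the distribution (more deterministic), above 1 flatten it (more varied). A minimal sketch with toy numbers:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by T, then apply a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max to avoid overflow in exp()
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.3)  # sharpened: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flattened: probability spread out
```

The same logits give very different pictures: at T=0.3 nearly all the mass sits on the top token, while at T=2.0 the alternatives stay live, which is where the "creative but off the rails" behavior comes from.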
But you do have a point about presenting these as cloud genies that will do spectacular things for you. This is not a great way to execute this as a product.
I loathe how these things are advertised by Apple, Google and Microsoft.
-
Temperature isn't even "creativity," per se; it's more a band-aid to patch looping and dryness in long responses.
-
Lower temperature works much better with modern sampling algorithms, e.g. min-p, DRY, and maybe dynamic temperature schemes like Mirostat. Ideally structured output, too. Unfortunately, corporate APIs usually don't offer these.
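For context, the min-p sampler mentioned above is simple to sketch: instead of a fixed nucleus cutoff, the threshold scales with the top token's probability, so the filter is strict when the model is confident and permissive when it isn't. A toy version over an already-softmaxed distribution (the numbers are illustrative):

```python
def min_p_filter(probs, min_p=0.1):
    # Keep only tokens whose probability is at least
    # min_p * (probability of the most likely token), then renormalize.
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

probs = [0.60, 0.25, 0.10, 0.04, 0.01]
filtered = min_p_filter(probs, min_p=0.1)
# Threshold is 0.1 * 0.60 = 0.06, so the last two tokens are dropped
# and the surviving three are renormalized.
```

With a flatter input distribution the same `min_p` would keep more tokens, which is exactly the confidence-scaling behavior a fixed top-p cutoff lacks.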
-
It can be mitigated by finetuning against looping/repetition/slop, but most models do the opposite, massively overtraining on their own output, which "inbreeds" the model.
-
And yes, domain-specific queries are best. Basically, the user needs separate prompt boxes for coding, summaries, creative suggestions, and so on, each with its own tuned settings (and ideally tuned models). You're right, this is a much better idea than offering a temperature knob to the user, but… most UIs don't even do this, for some reason.
What I'm getting at is that this is not a problem companies seem interested in solving. They want to treat users as idiots without the attention span to even categorize their own question.
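The per-task idea could be as simple as a preset table behind the UI. Everything here is hypothetical, and the numbers are made up for illustration, not recommendations:

```python
# Hypothetical per-task sampling presets, as the comment above suggests:
# low temperature for code and summaries, higher for creative work.
TASK_PRESETS = {
    "code":     {"temperature": 0.2, "min_p": 0.10},
    "summary":  {"temperature": 0.3, "min_p": 0.10},
    "creative": {"temperature": 0.9, "min_p": 0.05},
}

DEFAULT = {"temperature": 0.7, "min_p": 0.10}

def settings_for(task):
    # Fall back to a balanced default for uncategorized queries.
    return TASK_PRESETS.get(task, DEFAULT)
```

The user never sees a temperature knob; they just pick "summarize" or "brainstorm," and the UI applies the tuned settings.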
-