In a new study, MIT Media Lab tracked 55 people over four months to see how well they could write an essay, either with ChatGPT, with a search engine, or with just their unassisted brain. The researchers …
Similar criticisms have probably been leveled at many other technologies in the past: computers in general, typewriters, pocket calculators, etc. It is true that the use of these tools and technologies has probably contributed to a decline in skills such as memorization, handwriting, or mental arithmetic. However, I believe there is an important difference with chatbots: typewriters (and computers) usually produce very readable text (much better than most people’s handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source is no less correct than information that had been memorized (probably more so). The same can’t be said of chatbots and LLMs. They aren’t known to produce accurate or useful output reliably, so many of the skills being lost by relying on them might not be replaced with something better.
Show me a study where they find typewriter users “consistently underperformed at neural, linguistic, and behavioral levels”
No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. It’s important to note that I was part of Generation X and, due to innate clumsiness (and being left-handed), I didn’t have pretty handwriting even before computers became the norm. But I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.
But I admit that this is not comparable to chatbots.
… what.
This article is about a scientific study that shows clear differences in brain activity between people who used LLMs and those who didn’t. If you can’t tell the difference between that and whatever the hell you’re going on about, you might want to cut down on the LLM usage.
Maybe do some self reflection first? You’re missing their point and it’s bigger than Saturn.
Maybe explain to me how people baselessly criticising technology (like typewriters for example) for making people dumber is the same as a scientific study showing differences in EEG activity?
Or you could read the entirety of the first comment in this thread and see how it was not saying that. Notice the part that begins, “However, I believe there is an important difference to chatbots…”
It’s not. Read again.
LOL - you might not want to believe that, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong almost all the time).
Nevertheless, there will probably be some people who claim that thanks to LLMs we no longer need the skills for language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just like calculators can calculate a square root faster). I think that’s bullshit, because LLMs just aren’t capable of doing any of these things in a meaningful way.
Or maybe there are no skills being lost: because of the low reliability, the operator still needs to check the output using those very same skills. The LLM just helps with the routine.
You need expertise to tell where an error is happening, or even to notice it at all.
The main argument against LLMs is that you don’t gain systematic knowledge of how, say, the code works. What’s more, as long as it works, you aren’t motivated to look into it and understand just how it does so. That leaves you unable to see and fix mistakes.
If you are an experienced task-doer, yes, a quick glance can help you with that. But if you start out vibe-coding with an LLM, you skip over a couple of steps, and when confronted with an error you have to put in extra effort (learning the things you skipped) to make it right.
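To make that concrete, here is a minimal, hypothetical sketch (my illustration, not anything a commenter wrote) of code that survives a quick glance and a first test run, yet hides a mistake you can only spot and fix with exactly that kind of systemic knowledge: the classic mutable-default-argument pitfall in Python.

```python
# Hypothetical illustration: this "works" the first time you try it.
def append_tag(tag, tags=[]):   # BUG: the default list is created once, at
    tags.append(tag)            # definition time, and shared across all calls
    return tags

print(append_tag("a"))  # ['a']       -- looks fine
print(append_tag("b"))  # ['a', 'b']  -- state from the first call leaked in

# Fixing it requires knowing *why* it fails, not just that it does:
def append_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []               # a fresh list on every call
    tags.append(tag)
    return tags

print(append_tag_fixed("a"))  # ['a']
print(append_tag_fixed("b"))  # ['b']
```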
Personally, I fell seriously ill right when my math class was covering the basics of sin/cos/tan/cot, and from then on I felt I just didn’t get it at all as the class marched on to further topics built on them. It cost me far more work to get back on the same page than if I had never been absent. The same could have happened, I believe, if I had been able to upload my homework to an LLM, only to then face exams where I couldn’t use it anymore, or real-life applications (and, surprisingly, I did run into those).
Thinking through a problem yourself, or taking an idea and putting it into words, is like exercise for the brain. You may think you understand a thing from reading or hearing about it, but it’s only when you do it for yourself that you discover what you really know and what you don’t. It’s the difference between learning what a square root actually is and how to press the square root button on the calculator. It’s the difference between learning to drive and learning to turn on self-driving mode. Even if the outcome is the same, the learning experience is day and night.
Once you understand a concept well enough, using an LLM to get some busy work done, or just to get a starting point you can improve, isn’t all that bad, much like using a calculator after learning pen-and-paper division. But trying to use one while learning is almost certain to hurt your understanding, even if the LLM doesn’t outright make a bunch of stuff up.
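As a concrete version of the square-root analogy, here is a minimal sketch (again my illustration, not the commenter’s) of the difference between pressing the button and knowing what the button does: approximating a square root with Newton’s method, which is roughly the kind of thing the button hides.

```python
import math

def sqrt_newton(x, tolerance=1e-12):
    """Approximate sqrt(x) with Newton's method: a square root of x is a
    number r with r * r == x, and averaging a guess with x / guess moves
    the guess closer to r on every step."""
    if x < 0:
        raise ValueError("no real square root of a negative number")
    if x == 0:
        return 0.0
    guess = max(x, 1.0)                  # any positive starting guess converges
    while abs(guess * guess - x) > tolerance * x:
        guess = (guess + x / guess) / 2  # one Newton step
    return guess

print(sqrt_newton(2))  # 1.41421356...
print(math.sqrt(2))    # the calculator button: same answer, no insight needed
```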
Eating shit and not getting sick might be considered a skill, but beyond selling a yoga class, what use is it?