From p. 137:
The most consistent and significant behavioral divergence between the groups was observed in the ability to quote one’s own essay. LLM users significantly underperformed in this domain, with 83% of participants (15/18) reporting difficulty quoting in Session 1, and none providing correct quotes. This impairment persisted albeit attenuated in subsequent sessions, with 6 out of 18 participants still failing to quote correctly by Session 3. […] Search Engine and Brain-only participants did not display such impairments. By Session 2, both groups achieved near-perfect quoting ability, and by Session 3, 100% of both groups’ participants reported the ability to quote their essays, with only minor deviations in quoting accuracy.
Posted this on a Discord I’m in - one of the near immediate responses was “I’m glad they made a non-invasive procedure to lobotomise people”.
Nothing more to add, I just think that’s hilarious
chatbots really are leaded gasoline for zoomers
Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators, etc. It is true that the use of these tools and technologies has probably contributed to a decline in skills such as memorization, handwriting, or mental calculation. However, I believe there is an important difference with chatbots: typewriters (or computers) usually produce very readable text (much better than most people’s handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source isn’t any less correct than one that had been memorized (probably more so). The same can’t be said about chatbots and LLMs. They aren’t known to produce accurate or useful output in a reliable way - therefore many of the skills that are being lost by relying on them might not be replaced with something better.
Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc.
Show me a study where they find typewriter users “consistently underperformed at neural, linguistic, and behavioral levels”
No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. It’s important to note that I was part of Generation X and, due to innate clumsiness (and being left-handed), I didn’t have pretty handwriting even before computers became the norm. But I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.
But I admit that this is not comparable to chatbots.
… what.
This article is about a scientific study that shows clear differences in brain activity between people who used LLMs and those who didn’t. If you can’t tell the difference between that and whatever the hell you’re going on about, you might want to cut down on the LLM usage.
Maybe do some self-reflection first? You’re missing their point, and it’s bigger than Saturn.
Maybe explain to me how people baselessly criticising technology (like typewriters for example) for making people dumber is the same as a scientific study showing differences in EEG activity?
It’s not. Read again.
Or you could read the entirety of the first comment in this thread and see how it was not saying that. Notice the part that begins, “However, I believe there is an important difference to chatbots…”
LOL - you might not want to believe that, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong almost all the time).
Nevertheless, there will probably be some people who claim that thanks to LLMs we no longer need the skills for language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just like calculators can calculate a square root faster). I think that’s bullshit, because LLMs just aren’t capable of doing any of these things in a meaningful way.
chatbots and LLMs. They aren’t known to produce accurate or useful output in a reliable way - therefore many of the skills that are being lost by relying on them might not be replaced with something better.
Or maybe no skills are being lost: precisely because of the low reliability, the operator still has to check the output using those very same skills. The LLM just helps with the routine work.
Thinking through a problem yourself, or taking an idea and putting it into words, is like exercise for the brain. You may think you understand a thing from reading or hearing about it, but it’s only when you do it for yourself that you discover what you really know and what you don’t. It’s the difference between learning what a square root actually is and learning how to press the square root button on the calculator. It’s the difference between learning to drive and learning to turn on self-driving mode. Even if the outcome is the same, the learning experience is night and day.
Once you understand a concept well enough, then using an LLM to get some busy work done or just get a starting point that you can improve isn’t all that bad, much like using a calculator after learning pen-and-paper division, but trying to use one while learning is almost certain to hurt your understanding, even if the LLM doesn’t outright make a bunch of stuff up.
Eating shit and not getting sick might be considered a skill, but beyond selling a yoga class, what use is it?