In a new study, the MIT Media Lab measured 55 people over four months on how well they could write an essay, either with ChatGPT, with a search engine, or with just their unassisted brain. The researchers …
Chatbots and LLMs aren't known to produce accurate or useful output in a reliable way, so many of the skills being lost by relying on them might not be replaced with something better.
Or perhaps no skills are being lost at all: precisely because of that low reliability, the operator still has to check the output using those very same skills. The LLM just takes care of the routine.
You need expertise to tell where an error is happening, or even to notice it at all.
The main argument against LLMs is that you don't build systematic knowledge of how, say, the code works. Worse, as long as it works correctly, you have no motivation to look inside and understand how it does so. That leaves you unable to spot and fix mistakes.
If you're an experienced practitioner, yes, a quick glance can catch that. But if you start out vibe-coding with an LLM, you skip over a couple of steps, and when you're confronted with an error you have to apply much more effort (learning the things you skipped) to put it right.
Personally, I fell seriously ill right when our math classes covered the basics of sin/cos/tan/cot, and from then on I felt I just didn't get it at all as the class marched on to further topics built on them. It cost me far more work to get back on the same page than if I'd never been absent. The same could have happened, I believe, if I'd been able to feed my homework to an LLM, only to face exams where I couldn't use it anymore, or real-life applications of that math - and, surprisingly, I did run into those.
Thinking through a problem yourself, or taking an idea and putting it into words, is exercise for the brain. You may think you understand a thing from reading or hearing about it, but it's only when you do it yourself that you discover what you really know and what you don't. It's the difference between learning what a square root actually is and learning which calculator button computes one. It's the difference between learning to drive and learning to turn on self-driving mode. Even if the outcome is the same, the learning experience is night and day.
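To make the square-root distinction concrete, here's a minimal sketch in Python (the function name and tolerance are mine, purely for illustration): Newton's method computes a square root by repeatedly refining a guess, which is exactly the understanding the calculator button hides.

```python
import math

def my_sqrt(n: float, tolerance: float = 1e-12) -> float:
    """Square root via Newton's method: a better guess for sqrt(n)
    is the average of the current guess and n / guess."""
    if n < 0:
        raise ValueError("no real square root for negative numbers")
    if n == 0:
        return 0.0
    guess = max(n, 1.0)  # any positive starting point converges
    while abs(guess * guess - n) > tolerance * n:
        guess = (guess + n / guess) / 2  # one Newton step
    return guess

print(my_sqrt(2))      # 1.4142135623730951 (or very close)
print(math.sqrt(2))    # 1.4142135623730951, the "button"
```

Running the loop by hand for 2 (guess 2, then 1.5, then 1.4167, then 1.4142) is the kind of brain exercise described above; calling math.sqrt is pressing the button.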
Once you understand a concept well enough, using an LLM to get some busy work done, or to get a starting point you can improve on, isn't all that bad, much like using a calculator after learning pen-and-paper division. But trying to use one while learning is almost certain to hurt your understanding, even when the LLM doesn't outright make things up.
Eating shit and not getting sick might be considered a skill, but beyond selling a yoga class, what use is it?