Honestly, all of this is really interesting. It’s a whole side of humanity that I very much do NOT think about or follow. I spent the last decade much, much too busy stomping through the forest, so I really didn’t follow anything during that time. A new game or phone came out? Sure, cool, I might look that up. When I finally emerged from the fens, sodden and fly-bitten, I was very much out of the loop, despite the algorithm trying to cram articles about NFTs, crypto, etc. down my throat. I actually tend to avoid tech stuff because it’s too much of a learning curve at this point. I get the fundamentals, but beyond that I don’t dig in.
I agree with you on the bubble - it depends on the size. I guess my original take is: how could it actually get bigger than it is? I just don’t see how it can scale beyond being in phones or used in basic data analysis, like another Google. The AIs can definitely get more advanced, sure, but with that should come some sort of efficiency. We’re also seemingly on the cusp of quantum computing, which I imagine would reduce power requirements.
Meanwhile (and not to detract from the environmental concerns AI could pose) we have very, very real and very, very large environmental concerns that need addressing. Millions of cubic metres of sulphur are sitting in stockpiles in northern Alberta, threatening the Athabasca River. And that’s not even close to the top of the list of things we need to deal with before we can get out in front of the damage AI might cause.
We’re in a real mess.
The AIs can definitely get more advanced, sure, but with that should come some sort of efficiency.
This is what AI researchers/pundits believed until roughly 2020, when it was discovered you could brute-force your way to more advanced AIs (the so-called “scaling laws”) just by massively scaling up existing algorithms. That’s essentially what tech companies have been doing ever since. Nobody knows what the limit on this is going to be, but as far as I know nobody has any good evidence to suggest that we’re near the limit of what’s going to be possible with scaling.
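If it helps, here’s a toy illustration of what a “scaling law” means in practice. The power-law shape is the published idea, but every constant below is made up by me for illustration, not any lab’s real numbers. The point is that the algorithm stays fixed and the predicted loss just keeps sliding down as parameters and data grow:

```python
# Toy illustration of a neural "scaling law" (all constants invented).
# Loss falls as a smooth power law in parameter count N and token count D:
#     L(N, D) = E + A / N**alpha + B / D**beta
# Same algorithm throughout; the only thing that changes is scale.

E, A, B = 1.7, 400.0, 410.0   # irreducible loss + fitted constants (made up)
alpha, beta = 0.34, 0.28      # power-law exponents (made up)

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e8, 1e10), (1e9, 1e11), (1e10, 1e12), (1e11, 1e13)]:
    print(f"{n:.0e} params, {d:.0e} tokens -> predicted loss {loss(n, d):.2f}")
```

Each 10x jump in scale buys a predictable drop in loss, which is why the companies keep buying GPUs instead of waiting for a cleverer algorithm.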
We’re also seemingly on the cusp of quantum computing, which I imagine would reduce power requirements.
Quantum computers aren’t just faster regular computers. Quantum computing has efficiency advantages for some particular algorithms, such as breaking certain types of encryption. As far as I’m aware, nobody is really looking to replace ordinary computers with quantum computers in general. Even if they did, I don’t think anyone has figured out a way to accelerate AI using quantum computing. And even if there were a way, it would presumably require quantum computers like, 15 orders of magnitude more powerful than the ones we have today.
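To make “some particular algorithms” concrete: Shor’s algorithm (the encryption-breaking one) gets a dramatic speedup on factoring, while Grover’s search, the other textbook example, only gets a quadratic one. A back-of-envelope sketch of the Grover case (the problem sizes are just for illustration):

```python
# Back-of-envelope: Grover's algorithm searches an unstructured list of
# size N in about sqrt(N) steps instead of about N. A real win, but a
# quadratic one, and only for that specific kind of task.

import math

for n in [10**6, 10**12, 10**18]:
    classical = n              # ~N lookups in the worst case
    quantum = math.isqrt(n)    # ~sqrt(N) Grover iterations
    print(f"N = {n:.0e}: classical ~{classical:.0e} steps, quantum ~{quantum:.0e} steps")
```

Nothing in there helps with the giant matrix multiplications that AI training actually consists of, which is why quantum hardware isn’t an obvious route to cheaper AI.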
We have very, very real and very, very large environmental concerns that need addressing.
Yeah. I don’t think AI is really at the highest level of concern for environmental impact, especially since it looks plausible that it will lead to investment in nuclear power, which would be a net positive IMO. (Coolant could still be an issue though.)
How do they brute force their way to a better algorithm? Just trial and error? How do they check outcomes to determine that their new model is good?
I don’t expect you to answer those musings - you’ve been more than patient with me.
Honestly, I’m a tree hugger, and the fact that we aren’t going for nuclear simply because of smear campaigns and changes in public opinion is insanity. We already treat some mining wastes in perpetuity, or plan to have them entombed for the rest of time - how is nuclear waste any different?
It’s not brute-forcing your way to a better algorithm, per se. It’s the same algorithm, exactly as “stupid,” just with more force (more numerous and more powerful GPUs) running it.
There are benchmarks to check whether the model is “good” - for instance, how well it does on standardized tests similar to the SATs (researchers are very careful to ensure the questions don’t appear anywhere on the internet, so the model can’t just memorize the answers).
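Roughly, the checking loop looks like this. A minimal sketch: ask_model and the questions here are hypothetical stand-ins I made up, not any real benchmark or API:

```python
# Minimal sketch of benchmark scoring: run a model over held-out
# questions and report the fraction it answers correctly.

def ask_model(question: str) -> str:
    # Placeholder: a real harness would query the model here.
    return "B"

# Held-out multiple-choice items (invented; real benchmarks keep theirs
# off the public internet precisely so models can't memorize them).
benchmark = [
    {"question": "2 + 2 = ?  A) 3  B) 4  C) 5", "answer": "B"},
    {"question": "Capital of France?  A) Lyon  B) Paris  C) Nice", "answer": "B"},
]

correct = sum(ask_model(item["question"]) == item["answer"]
              for item in benchmark)
print(f"score: {correct}/{len(benchmark)} = {correct / len(benchmark):.0%}")
```

So “good” just means scoring higher on held-out question sets like these than the previous model did.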