- cross-posted to:
- [email protected]
That’s why I bake my cake at 2608°C for ~1.8 minutes; it just works™
Project Manager here, and where I’m from it’s common knowledge that 9 women can have a baby in a month.
Or!—hear me out—one woman whose 8 co-gestators were just laid off by someone who doesn’t understand what their job was
Burnt crust full of liquid cake. Yum!
Cake brûlée
It’s more accurate than you think, because brûlée literally means burnt
“Work 50% longer weeks so you can make something that’ll both make me richer AND cost you your jobs!” is not the motivational speech he thinks it is.
I don’t even know what AGI is, and I read the headline as “rich, disconnected-from-reality asshole”.
Turns out I was right.
Wait, are these AI boosters bragging about how close they are to building God? Is this the torture that Roko’s Basilisk is inflicting on us all?
lol, because if they work 40-hour weeks they might create it a couple of weeks later
Or, worse, they might actually have to hire enough people to actually do the job. Why hire 100 people with good work-life balance when you can hire 60 people who aren’t allowed to have lives or families?
60 ~~people~~ workers that aren’t allowed to have lives or families

I mean, that’s what the AI will be for…
Yeah, suddenly they’ll go from 60 hour work weeks to 0 if the AI proponents are to be believed (which you shouldn’t).
For real – ultimately it’s the dream of every billionaire to have a servile AI at their beck and call, while the rest of us can eat rocks and roam the wasteland fighting over gasoline.
He can fuck all the way off.
globally
AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.
I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.
pretending LLMs are AI
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
If a basic chess engine is AI then bubble sort is too
It’s not. Bubble sort is a purely deterministic algorithm with no learning or intelligence involved.
Many chess engines run on deterministic algos as well
Bubble sort is just a basic set of steps for sorting numbers - it doesn’t make choices or adapt. A chess engine, on the other hand, looks at different possible moves, evaluates which one is best, and adjusts based on the opponent’s play. It actively searches through options and makes decisions, while bubble sort just follows the same repetitive process no matter what. That’s a huge difference.
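The search-and-evaluate loop described above can be sketched as plain minimax. This is a toy illustration, not a real engine: actual chess engines add alpha-beta pruning, move ordering, and far richer evaluation, and the three callbacks here (`legal_moves`, `apply_move`, `evaluate`) are hypothetical stand-ins for the game rules.

```python
# Minimal minimax: look at possible moves, evaluate outcomes, pick the best.
# The game itself is abstracted behind three hypothetical callbacks.
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in moves:
            score, _ = minimax(apply_move(state, m), depth - 1, False,
                               legal_moves, apply_move, evaluate)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in moves:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               legal_moves, apply_move, evaluate)
            if score < best:
                best, best_move = score, m
    return best, best_move
```

Whether this kind of deterministic search counts as “intelligence” is exactly what the thread is arguing about; the sketch only shows the mechanism being described.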
Your argument reduces to saying that if an algorithm comprises many steps, it is AI, and if not, it isn’t.
A chess engine decides nothing. It understands nothing. It’s just an algorithm.
Here we go… Fanperson explaining the world to the dumb lost sheep. Thank you so much for stepping down from your high horse to try and educate a simple person. /s
How’s insulting the people respectfully disagreeing with you working out so far? That was completely uncalled for.
“Fanperson” is an insult now? Cry me a river, snowflake. Also, you weren’t disagreeing, you were explaining something to someone perceived less knowledgeable than you, while demonstrating you have no grasp of the core difference between stochastics and AI.
There are at least three of us.
I am worried what happens when the bubble finally pops because shit always rolls downhill and most of us are at the bottom of the hill.
That undersells them slightly.
LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference from a real answer.
They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.
LLMs can now generate answers. Watch this:
My favourite way to liken LLMs to something else is autocorrect: it just guesses, it gets stuff wrong, and it’s constantly being retrained to recognise your preferences, such as no longer correcting fuck to duck.
And it’s funny and sad how some people think these LLMs are their friends. Like, no, it’s a colossally sized autocorrect system that you cannot comprehend; it has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
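The “it just predicts the next word” point can be sketched with a toy bigram model. This is a deliberately crude stand-in: real LLMs use learned weights over much longer contexts, but the “emit the likeliest continuation” loop is the same shape.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count bigrams in a corpus, then always
# emit the most frequent follower. No understanding involved.
corpus = "the cake is a lie the cake is burnt the cake is a lie".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    # Most common continuation of the given word.
    return followers[word].most_common(1)[0][0]

print(predict("cake"))  # -> "is"
print(predict("is"))    # -> "a"
```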
Just for information: we know, from multiple studies, that working more than 40 hours a week for longer periods of time is extremely unhealthy for you. A week has 24 × 7 = 168 hours, and you should sleep 8 hours a night. That’s 56 hours of sleep, and if you’re working 60 hours, that leaves you with 52 hours, or about 7.5 hours per day, for stuff like “commuting to work”, “buying groceries”, “brushing your teeth”, “family”, “friends”, “sport” or “this important appointment at the dentist”.
And those 7.5 hours are without a weekend. This will kill you. You might be young and feel strong, but this will kill you.
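The weekly budget above checks out; a quick sketch using the comment’s own assumptions (8 hours of sleep a night, a 60-hour work week):

```python
# Weekly time budget under a 60-hour work week.
hours_per_week = 24 * 7         # 168 hours in a week
sleep = 8 * 7                   # 56 hours of sleep (8 per night)
work = 60                       # the proposed work week

leftover = hours_per_week - sleep - work
print(leftover)                 # 52 hours left for everything else
print(round(leftover / 7, 1))   # ~7.4 hours per day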
Not to mention that it doesn’t yield higher output. So it’s stupid on every level.
Yeah, that is also a factor. You can’t expect good work from somebody who has been working 60-hour weeks for years without a vacation.
And if you want to keep a two-day weekend, 60 hours in 5 days is 12 hours of work a day; minus 8 hours for sleep you get 4 hours, minus ~2 hours of commute you get 2 hours, and the rest is basic cooking and eating. That leaves 0 hours for anything else, including rest, and any other duties end up getting resolved over the weekend. This will absolutely kill you in the long run.
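The same arithmetic per day, again using the comment’s own figures (8 hours of sleep, a ~2-hour commute, cooking and eating filling the remainder):

```python
# Daily time budget if 60 hours are worked across 5 days.
# The 2-hour commute is the comment's estimate, not a universal figure.
work = 60 / 5                       # 12 hours of work per day
sleep = 8
commute = 2
free = 24 - work - sleep - commute  # hours left before cooking and eating
print(free)                         # 2.0
```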
I remember hearing about somewhere (Alphabet or Meta or something like that) that basically provided adult crèche facilities for its employees. Way beyond just food: on-site nap rooms, washing machines, showers, the works. All to enable a super unhealthy attitude towards work. Thinking about how much that must have affected anyone going there straight after uni, when they should have been learning how to look after themselves, makes me shudder with cringe.
The plantations are quite comfortable these days…
This is almost too depressing to be funny.
If it’s within reach of a 60 hour week then it’s within reach of a 30 hour week.
This LLM copycat bullshit is never going to be it though. It’s not thinking, it’s looking up the answers at the back of the book.
Is there any actual evidence that they are getting closer to AGI? It seems ridiculous to think that this LLM parrot bullshit is getting there.
Yup, hire 20-30% more people and have them work 30 hours. That’s fewer total hours worked, but they’re higher quality hours, so you should get more from less.
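A quick check on the totals, reading “20-30% more people” as 25% on top of a hypothetical 100-person, 40-hour baseline:

```python
# Total weekly person-hours: baseline vs. the more-staff, shorter-week proposal.
baseline = 100 * 40   # 100 people at 40 hours -> 4000 person-hours
proposal = 125 * 30   # 125 people at 30 hours -> 3750 person-hours
print(baseline, proposal)  # fewer total hours, as the comment claims
```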
Who gives a fuck what Sergey Brin thinks
If it can be reached in 60 hour work weeks it can be reached in 40, but nah mfs should rush to get themselves replaced
If it can be reached in 60 hour work weeks, then it can be reached in 40 hour work weeks by hiring a second person.
If management isn’t willing to put in the effort of hiring the required staff, why would I want to work the job of 2 people for 1 person’s pay?
IT project management doesn’t work that way, but it doesn’t matter much. 60 hour work weeks wouldn’t help, either.
I’m really getting sick and tired of these rich fuckers saying shit like this.
- we are nowhere close to AGI given the current technology
- working 50% longer is not going to make a bit of difference for AGI
- and even if it would matter, hire 50% more people
The only thing this is going to accomplish is likely make him wealthier. So fuck him.
Or option 4) stay as you are and you’ll just achieve it in due time rather than in a 50% shorter timeframe?
Edit: 25% shorter? I don’t know, maths isn’t my strong suit and I’m drunk.
If you need 50% more person-hours, hire 50% more people.
Instructions unclear, fired 50% of people
What if they just work 30 hour weeks for twice as many weeks?
More like 10-20% more weeks. It turns out people get less productive the more hours they work.
If it’s in reach working 60 hour weeks, it’s also in reach working 40 hour weeks, it will just take 50% longer. ;)
Let’s be real, it’ll probably happen faster on 40 hour work weeks than 60.
Lots of evidence is starting to point to it being quicker with 32-hour weeks than 40
Thought this was an Onion article!
Hey plebs! I demand you work 50% more to develop AGI so that I can replace you with robots, fire all of you, and make myself a double plus plutocrat! Also, I want to buy an island, a small city, a bunker, a spaceship, and/or something.