

I think I figured it out.
He fed his post to AI and asked it to list the fictional universes he’d want to live in, and that’s how he got Dune. Precisely the information he needed, just as his post describes.
I am also presuming this is about purely non-fiction technical books
He has Dune on his list of worlds to live in, though…
edit: I know. He fed his post to AI and asked it to list the fictional universes he’d want to live in, and that’s how he got Dune. Precisely the information he needed.
Naturally, that system broke down (via capitalists grabbing the expensive fusion power plants for their own purposes)
This is kind of what I have to give to Niven. The guy is a libertarian, but he would follow his story all the way into such results. And his series where organs are being harvested for minor crimes? It completely flew over my head that he was trying to criticize taxes, and not, say, Republican tough-on-crime politics, mass incarceration, and for-profit prisons. Because he followed the logic of the story, it aligned naturally with its real-life counterpart, the for-profit prison system, even if he wanted to make some sort of completely insane anti-tax argument where taxing rich people is like harvesting organs or something.
On the other hand, the much better regarded Heinlein, also a libertarian, would write up a moon base that exports organic carbon and where you have to pay for the oxygen you convert to CO2. Just because he wanted to make a story inside which “having to pay for air to breathe” works fine.
Maybe he didn’t read Dune; he just had AI summarize it.
Yolo charging mode on a phone: disable the battery overheating sensor and the current limiter.
I suspect that they added yolo mode because without it this thing is too useless.
There is an implicit claim in the red button that it was worth including.
It is like Google’s AI overviews. There cannot be a sufficient disclaimer, because an overview sitting at the top of Google search implies a level of usefulness it does not meet, not even in the “evil plan to make more money briefly” way.
Edit: my analogy to AI disclaimers is using “this device uses nuclei known to the state of California to…” in place of “drop and run”.
Jesus Christ on a stick, that’s some thrice-cursed shit.
Maybe susceptibility runs in families, culturally. Religion does, for one thing.
I think this may also be a specific low-level exploit, whereby humans are already biased to mentally “model” anything as having agency (see all the sentient gods that humans invented for natural phenomena).
I was talking to an AI booster (ewww) in another place, and I think they really are predominantly laymen brain-fried by this shit. That particular one posted a convo where out of 4 arithmetic operations, 2 were “12042342 can be written as 120423 + 19, and 43542341 as 435423 + 18”, combined with AI word-salad, and he was expecting that this would be convincing.
It’s not that this particular person thinks it’s a genius; he thinks that it is not a mere computer, and the way it is completely shit at math only serves to prove to him that it is not a mere computer.
edit: And of course they care not for any mechanistic explanations, because all of those imply LLMs are not sentient, and they believe LLMs are sentient. The “this isn’t it, but one day some very different system will be” counterargument doesn’t help either.
Yeah, I think it is almost undeniable that chatbots trigger some low-level brain thing. Eliza has a 27% Turing Test pass rate. And long before that, humans attributed weather and random events to sentient gods.
This makes me think of Langford’s original BLIT short story.
And also of the rove beetles that parasitize ant hives. These bugs are not ants, but they pass the Turing test for ants - they tap antennae with an ant, the handshake is correct, and they get identified as ants from this colony rather than unrelated bugs or ants from another colony.
I think it’s gotten to the point where it’s about as helpful to point out that it is just an autocomplete bot as it is to point out that “it’s just the rotor blades chopping sunlight” when a helicopter pilot is impaired by flicker vertigo and is gonna crash. Or, in the world of the BLIT short story, that it’s just some ink on a wall.
The human nervous system is incredibly robust compared to software, or to its counterpart in the fictional world of BLIT, or to shrimp mesmerized by cuttlefish.
And yet it has exploitable failure modes, and a corporation that is optimizing an LLM for various KPIs is a malign intelligence that is searching for a way to hack brains, this time with much better automated tooling and with a very large budget. One may even say a super-intelligence since it is throwing the combined efforts of many at the problem.
edit: that is to say, there certainly is something weird going on at the psychological level, ever since Eliza.
Yudkowsky is a dumbass layman posing as an expert, and he’s playing up his own old preconceived bullshit. But if he can get some of his audience away from the danger - even if he attributes a good chunk of the malevolence to a dumbass autocomplete to do so - that is not too terrible of a thing.
It would have to be more than just river crossings, yeah.
Although I’m also dubious that their LLM is good enough for universal river crossing puzzle solving using a tool. It’s not that simple; the constraints have to be translated into the format that the tool understands, and the answer translated back. I got told that o3 solves my river crossing variant, but the chat log they gave had incorrect code being run and then a correct answer magically appearing, so I think it wasn’t anything quite as general as that.
I’d just write the list, then assign randomly. Or perhaps pseudorandomly: sort by hash, then split in two, something like the sketch below.
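Roughly this, as a Python sketch (the puzzle strings are placeholders, obviously); hashing the puzzle text itself keeps the assignment deterministic and auditable after the fact:

```python
import hashlib

# Placeholder puzzle statements; the real list would go here.
puzzles = ["puzzle A ...", "puzzle B ...", "puzzle C ...", "puzzle D ..."]

def stable_hash(text: str) -> str:
    # Hash the puzzle text itself so the split can't be quietly re-rolled later.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

ordered = sorted(puzzles, key=stable_hash)   # "sort by hash"
half = len(ordered) // 2
public_set = ordered[:half]                  # gets tested and posted
held_back_set = ordered[half:]               # never shown to any online chatbot
```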
One problem is that it is hard to come up with 20 or more completely unrelated puzzles.
Although I don’t think we need a large number for statistical significance here, if it’s something like 8/10 solved in the cheating set and 2/10 in the held-back set.
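Quick sanity check on that intuition, assuming scipy is installed (the 8/10 vs 2/10 numbers are just the hypothetical above):

```python
from scipy.stats import fisher_exact

# Rows: public set, held-back set. Columns: solved, unsolved.
table = [[8, 2],
         [2, 8]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # roughly 0.02 - the gap is unlikely to be chance even with only 10 puzzles per set
```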
Yeah, any time it’s regurgitating an IMO problem, it’s proof it’s almost superhuman; but any time it actually faces a puzzle with an unknown answer, “this is not what it is for”.
Further support for the memorization claim: I posted examples of novel river crossing puzzles where LLMs completely fail (on this forum).
Note that Apple’s actors / agents river crossing is a well-known “jealous husbands” variant, which you can ask a chatbot to explain to you. It gladly explains, even as it can’t follow its own explanation (since of course it isn’t its own explanation but a plagiarized one, even if it changes words).
edit: https://awful.systems/post/4027490 and earlier https://awful.systems/post/1769506
I think what I need to do is to write up a bunch of puzzles, assign them randomly to 2 sets, and test & post one set, while holding back on the second set (not even testing it on any online chatbots). Then in a year or two see how much the set that’s public improves, vs the one that’s held back.
making LLMs not say racist shit
That is so 2024. The new big thing is making LLMs say racist shit.
Can’t be assed to read the bs, but sometimes the use-after-free only happens in some rarely executed code path, or only when one branch is executed and then later another branch. So you may still need fuzzing to trigger the use-after-free for Valgrind to detect it.
Chatbots ate my cult.
I swear I’m gonna plug an LLM into a rather traditional solver I’m writing. I may tuck deep into the paper a point about how it’s quite slow to use an LLM to mutate solutions in a genetic algorithm or a swarm solver. And in any case, non-LLM would be the default.
Normally I wouldn’t sink that low but I got mouths to feed, and frankly, fuck it, they can persist in this madness for much longer than I can stay solvent.
This is as if there was a mass delusion that a pseudorandom number generator can serve as an oracle, predicting the future. Doing any kind of Monte Carlo simulation of something like weather in that world would of course confirm all the dumb shit.
I wonder what’s gonna happen first, the bubble popping or Yudkowsky getting so fed up with gen AI he starts sneering.
Incels then: Zuckerberg creates a hot-or-not clone with stolen student data, gets away with it, becomes a billionaire.
Incels now: chatgpt, what’s her BMI.