• 0 Posts
  • 251 Comments
Joined 1 year ago
Cake day: March 22nd, 2024


  • Your bonus points link is even dumber than you’re suggesting. The first half of the tweet:

    I don’t want to live in the world of “Camp Of The Saints”.

    I don’t want to live in the world of “Atlas Shrugged”.

    I don’t want to live in the world of “The GULag Archipelago”.

    I don’t want to live in the world of “Nineteen Eighty-Four”.

    I don’t want to live in the “Brave New World”.

    I want to live in the world of Hyperion, Ringworld, Foundation, and Dune

    I don’t want bad things! I want good-ish things!

    Also, I’ve never read Ringworld or Hyperion, but the other two (Foundation and Dune) span literal millennia and show wildly different societies over that period. Hell, showcasing that development is the entire point of the first set of Foundation stories. Just… you can absolutely tell this sonofabitch doesn’t actually read.

  • I mean, you could make an actual evo psych argument about the importance of being able to model the behavior of other people in order to function in a social world. But I think part of the problem at this point is also the language. Like, anthropomorphizing computers has always been part of how we interact with them: churning through an algorithm means it’s “thinking”, an unexpected shutdown means it “died”, sending signals through a network interface means it’s “talking”, and so on. But GenAI chatbots (chatbots in general, really, though it’s gotten worse as their ability to imitate conversation has improved) make it far too easy to assign them actual agency and personhood, and it would be really useful to have a similarly convenient way of talking about what they do and how they do it without that baggage.

  • I mean, I think the whole concept of AI consciousness emerged from science fiction writers who wanted to interrogate the economic and social consequences of totally dehumanizing labor, similar to R.U.R. and Metropolis. The concept had sufficient legs that it got used to explore questions like “what does it mean to be human?” in a whole bunch of stories. Some were pretty good (Bicentennial Man, Asimov 1976) and others much less so (Bicentennial Man, Columbus 1999). I think the TESCREAL crowd had a lot of overlap with the kind of people who created, expanded, and utilized the narrative device and experimented with related technologies in computer science and robotics, but saying they originated it gives them far too much credit.

  • I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you’re an asshole to the frontend, there’s a nonzero chance that a human person is still going to have to deal with it.

    Also, I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with “hello, this is YourNet with $CompanyName Support.” I’m not taking chances on unthinkingly answering an email with “alright, you shitty robot. Don’t lie to me or I’ll barbecue this old Commodore 64 that was probably your great uncle or whatever.”

  • This is a good example of something that I feel like I need to drill into a bit more. I’m pretty sure this isn’t unexpected behavior or overfitting of the training data. Rather, for a niche question like “what time zone does this tiny community use?”, one relatively successful article in a satirical paper should have an outsized impact on the statistical patterns surrounding those words. And since, as far as the model is concerned, there is no referent to check against, this kind of thing should be expected to keep coming up whenever specific topics or phrases appear near each other in relatively novel ways. The smaller the number of examples, the larger the impact each one has on the overall pattern, so it should be entirely unsurprising that one satirical example “poisons” the output this cleanly (see the toy counting sketch after this comment).

    Assuming this is the case, I wonder if it’s possible to weaponize it by identifying tokens with low overall reference counts that could be expanded with minimal investment of time. Sort of like Google bombing.
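
    To make that counting intuition concrete, here is a minimal Python sketch. It is not how an LLM actually works or is trained; it is just a toy conditional-frequency table over invented example pairs, showing how a single joke answer dominates a sparsely covered query but gets diluted on a heavily covered one. Every phrase, answer, and count below is made up for illustration.

        from collections import Counter

        # Hypothetical "training data": the niche question appears only three times,
        # and one of those occurrences comes from a satirical article.
        niche = [
            ("what time zone does this tiny community use", "Central Time"),
            ("what time zone does this tiny community use", "Central Time"),
            ("what time zone does this tiny community use", "its own 37-minute offset"),  # the satirical one
        ]

        # A heavily covered question: a thousand consistent answers plus the same single joke.
        well_covered = (
            [("what time zone does new york use", "Eastern Time")] * 1000
            + [("what time zone does new york use", "its own 37-minute offset")]
        )

        def answer_distribution(pairs, query):
            """Fraction of matching examples that give each answer."""
            counts = Counter(answer for q, answer in pairs if q == query)
            total = sum(counts.values())
            return {answer: count / total for answer, count in counts.items()}

        print(answer_distribution(niche, "what time zone does this tiny community use"))
        # the one satirical example carries a third of the probability mass

        print(answer_distribution(well_covered, "what time zone does new york use"))
        # the identical joke is diluted to roughly 0.1% by the consistent examples

    Real models smooth over token statistics rather than whole question/answer pairs, but the scarcity effect is the same one described above: fewer examples means each one, satirical or not, moves the pattern much more.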