Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • self@awful.systems · 1 month ago

    I’ve started on the long path towards trying to ruggedize my phone’s security somewhat, and I’ve remembered a problem I forgot since the last time I tried to do this: boy howdy fuck is it exhausting how unserious and assholish every online privacy community is

    • Sailor Sega Saturn@awful.systems · 1 month ago

      The part I hate most about phone security on Android is that the first step is inevitably to buy a new phone (it might be better on iPhone but I don’t want an iPhone)

      The industry talks the talk about security being important, but can never seem to find the means to provide simple security updates for more than a few years. Like I’m not going to turn my phone into e-waste before I have to so I guess I’ll just hope I don’t get hacked!

      • self@awful.systems · 1 month ago

        that’s one of the problems I’ve noticed in almost every online privacy community since I was young: a lot of it is just rich asshole security cosplay, where the point is to show off what you have the privilege to afford and free time to do, even if it doesn’t work.

        I bought a used phone to try GrapheneOS, but it only runs on 6th-9th gen Pixels specifically due to the absolute state of Android security and backported patches. it’s surprisingly ok so far? it’s definitely a lot less painful than expected coming from iOS, and it’s got some interesting options to use even potentially spyware-laden apps more privately and some interesting upcoming virtualization features. but also its core dev team comes off as pretty toxic and some of their userland decisions partially inspired my rant about privacy communities; the other big inspiration was privacyguides.

        and the whole time my brain’s like, “this is seriously the best we’ve got?” cause neither graphene nor privacyguides seem to take the real threats facing vulnerable people particularly seriously — or they’d definitely be making much different recommendations and running much different communities. but online privacy has unfortunately always been like this: it’s privileged people telling the vulnerable they must be wrong about the danger they’re in.

        • BlueMonday1984@awful.systems (OP) · 1 month ago

          some of their userland decisions partially inspired my rant about privacy communities; the other big inspiration was privacyguides.

          I need to see this rant. If you can link it here, I’d be glad.

          • self@awful.systems · 1 month ago

            oh I meant the rant that started this thread, but fuck it, let’s go, welcome to the awful.systems privacy guide

            grapheneOS review!

            pros:

            • provably highly Cellebrite-resistant due to obsessive amounts of dev attention given to low-level security and practices enforced around phone login
            • almost barebones AOSP! for better or worse
            • sandboxed Google Play Services so you can use the damn phone practically without feeding all your data into Google’s maw
            • buggy but usable support for Android user profiles and private spaces so you can isolate spyware apps to a fairly high degree
            • there’s support coming for some very cool virtualization features for securely using your phone as one of them convertible desktops or for maybe virtualizing graphene under graphene
            • it’s probably the only relatively serious choice for a secure mobile OS? and that’s depressing as fuck actually, how did we get here

            cons:

            • the devs seem toxic
            • the community is toxic
            • almost barebones AOSP! so good fucking luck when the AOSP implementation of something is broken or buggy or missing cause the graphene devs will tell you to fuck off
            • the project has weird priorities and seems to just forget to do parts of their roadmap when their devs lose interest
            • their browser vanadium seems like a good chromium fork and a fine webview implementation but lacks an effective ad blocker, which makes it unsafe to use if your threat model includes, you know, the fucking obvious. the graphene devs will shame you for using anything but it or brave though, and officially recommend using either a VPN with ad blocking or a service like NextDNS since they don’t seem to acknowledge that network-level blocking isn’t sufficient (a toy sketch of why it isn’t follows this list)
            • there’s just a lot of userland low hanging fruit it doesn’t have. like, you’re not supposed to root a grapheneOS phone cause that breaks Android’s security model wide open. cool! do they ship any apps to do even the basic shit you’d want root for? of course not.
            • you’ll have 4 different app stores (per profile) and not know which one to use for anything. if you choose wrong the project devs will shame you.
            • the docs are wildly out of date, of course, why wouldn’t they be. presumably I’m supposed to be on Matrix or Discord but I’m not going to do that
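
            (to make the “network-level blocking isn’t sufficient” point concrete: a DNS filter only ever sees hostnames, so it can do nothing about ads and trackers served from the same host as the page, let alone element-level filtering. here’s a toy sketch of just the URL half of that gap, in Python, every hostname made up:)

              # what a DNS-level blocker like NextDNS gets to work with: hostnames only
              DNS_BLOCKLIST = {"ads.example.net"}

              def dns_blocked(hostname: str) -> bool:
                  return hostname in DNS_BLOCKLIST

              # what an in-browser blocker gets to work with: the full URL (and the DOM)
              def browser_blocked(url: str) -> bool:
                  host, _, path = url.removeprefix("https://").partition("/")
                  # same-host ad path: trivial here, invisible to DNS-level blocking
                  return dns_blocked(host) or path.startswith("ads/")

              print(dns_blocked("news.example.com"))                             # False
              print(browser_blocked("https://news.example.com/ads/tracker.js"))  # True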

            and now the NextDNS rant:

            this is just spyware as a service. why in fuck do privacyguides and the graphene community both recommend a service that uniquely correlates your DNS traffic with your account (even the “try without an account” button on their site generates a 7 day trial account and a DNS instance so your usage can be tracked) and recommend configuring it in such a way that said traffic can be correlated with VPN traffic? this is incredibly valuable data especially when tagged with an individual’s identity, and the only guarantee you have that they don’t do this is a promise from a US-based corporation that will be broken the instant they receive a court order. privacyguides should be ashamed for recommending this unserious clown shit.

            • sinedpick@awful.systems · 1 month ago (edited)

              their browser vanadium seems like a good chromium fork and a fine webview implementation but lacks an effective ad blocker, which makes it unsafe to use if your threat model includes, you know, the fucking obvious. the graphene devs will shame you for using anything but it or brave though, and officially recommend using either a VPN with ad blocking or a service like NextDNS since they don’t seem to acknowledge that network-level blocking isn’t sufficient

              No Firefox with uBlock Origin? Seems like that would be the obvious choice here (or maybe not, due to Mozilla’s recent antics)

              • self@awful.systems · 1 month ago (edited)

                the GrapheneOS developers would like you to know that switching to Ironfox, the only Android Firefox fork (to my knowledge) that implements process sandboxing (and also ships ublock origin for convenience) (also also, the Firefox situation on Android looks so much like intentional Mozilla sabotage, cause they have a perfectly good sandbox sitting there disabled) is utterly unsafe because it doesn’t work with a lesser Android sandbox named isolatedProcess or have the V8 sandbox (because it isn’t V8) and its usage will result in your immediate death

                so anyway I’m currently switching from vanadium to ironfox and it’s a lot better so far

                • nightsky@awful.systems · 1 month ago (edited)

                  and its usage will result in your immediate death

                  This all-or-nothing approach, where compromises are never allowed, is my biggest annoyance with some privacy/security advocates, and also it unfortunately influences many software design choices. Since this is a nice thread for ranting, here’s a few examples:

                  • LibreWolf enables “resist fingerprinting” by default. That’s nice. However, that setting also hard-enables “smooth scrolling”, because apparently having non-smooth scrolling can be fingerprinted (that being possible is IMO reason alone to burn down the modern web altogether). Too bad that smooth scrolling sometimes makes me feel dizzy, and then I have to disable it. So I don’t get to have “resist fingerprinting”. Cool.
                  • Some of the modern Linux software distribution formats like Snap or Flatpak, which are so super secure that some things just don’t work. After all, the safest software is the one you can’t even run.
                  • Locking down permissions on desktop operating systems, because I, the sole user and owner of the machine, should not simply be allowed to do things. Things like using a scanner or a serial port. Which is of course only for my own protection. Also, I should constantly have to prove my identity to the machine by entering credentials, because what if someone broke into my home and was able to type “dmesg” without sudo to view my machine’s kernel log without proving that they are me? That would be horrible. Every desktop machine must be locked down to the highest extent, as if it were a high-security server. (A sketch of the single knob behind the dmesg bit follows this list.)
                  • Enforcement of strong password complexity rules on local-only devices or services which will never be exposed to potential attackers unless they gain physical access to my home.
                  • Possibly controversial, but I’ll say it: web browsers being so annoying about self-signed certificates. Please at least give me a checkbox to allow them for hosts with RFC 1918 addresses. Doesn’t have to be on by default, but why can’t that be a setting? (Generating the certificate was never the hard part; see the second sketch after this list.)
                  • The entire reality of secure boot on most platforms. The idea is of course great, I want it. But implementations are typically very user-hostile. If you want to have some fun, figure out how to set up a PC with a Linux where you use your own certificate for signing. (I haven’t done it yet, I looked at the documentation and decided there are nicer things in this world.)
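
                  (a footnote on the dmesg item above: the lockdown being complained about is one sysctl, kernel.dmesg_restrict. a minimal sketch for checking where a machine stands — assumes Linux plus Python, nothing else:)

                    # check whether unprivileged users may read the kernel log
                    from pathlib import Path

                    knob = Path("/proc/sys/kernel/dmesg_restrict")
                    if knob.read_text().strip() == "1":
                        print("dmesg is root-only here (kernel.dmesg_restrict=1)")
                        print("an admin can relax it: sysctl -w kernel.dmesg_restrict=0")
                    else:
                        print("anyone logged in can read the kernel log")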
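
                  (and on the self-signed certificate item: the sketch below does roughly what openssl req -x509 does, written with the pyca/cryptography package, with 192.168.1.10 standing in for whatever RFC 1918 host you actually run. the missing piece is purely the browser UX for trusting the result:)

                    import datetime, ipaddress
                    from cryptography import x509
                    from cryptography.x509.oid import NameOID
                    from cryptography.hazmat.primitives import hashes, serialization
                    from cryptography.hazmat.primitives.asymmetric import rsa

                    HOST = ipaddress.ip_address("192.168.1.10")  # placeholder LAN address

                    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
                    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, str(HOST))])
                    now = datetime.datetime.now(datetime.timezone.utc)

                    cert = (
                        x509.CertificateBuilder()
                        .subject_name(name)              # self-signed: subject...
                        .issuer_name(name)               # ...and issuer are the same
                        .public_key(key.public_key())
                        .serial_number(x509.random_serial_number())
                        .not_valid_before(now)
                        .not_valid_after(now + datetime.timedelta(days=365))
                        .add_extension(                  # browsers insist on a SAN
                            x509.SubjectAlternativeName([x509.IPAddress(HOST)]),
                            critical=False,
                        )
                        .sign(key, hashes.SHA256())
                    )

                    with open("host.crt", "wb") as f:
                        f.write(cert.public_bytes(serialization.Encoding.PEM))
                    with open("host.key", "wb") as f:
                        f.write(key.private_bytes(
                            serialization.Encoding.PEM,
                            serialization.PrivateFormat.TraditionalOpenSSL,
                            serialization.NoEncryption(),
                        ))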

                  This has gotten pretty long already, I will stop now. To be clear, this is not a rant against security… I take the security of my devices seriously. But I’m annoyed that I’m forced to have protections in place against threat models that are irrelevant, or at least sufficiently negligible, for my personal use cases. (IMO one root cause is that too much software these days is written for the needs of enterprise IT environments, because that’s where the real money is, but that’s a different rant altogether.)

  • sc_griffith@awful.systems · 1 month ago

    was discussing a miserable AI related gig job I tried out with my therapist. doomerism came up, I was forced to explain rationalism to him. I would prefer that all topics I have ever talked to any of you about be irrelevant to my therapy sessions

    • swlabr@awful.systems · 1 month ago

      On one hand: all of this stuff entering greater public awareness is vindicating, i.e. I knew about all this shit before so many others, I’m so cool

      On the other hand: I want to stop being right about everything please, please just let things not become predictably worse

        • froztbyte@awful.systems · 1 month ago

          Even just The Cassandras would work well (that way all the weird fucks who are shitty about gender would hate the name even more)

  • BlueMonday1984@awful.systems (OP) · 1 month ago

    New-ish thread from Baldur Bjarnason:

    Wrote this back on the mansplainiverse (mastodon):

    It’s understandable that coders feel conflicted about LLMs even if you assume the tech works as promised, because they’ve just changed jobs from thoughtful problem-solving to babysitting

    In the long run, a babysitter gets paid much less than an expert

    What people don’t get is that when it comes to LLMs and software dev, critics like me are the optimists. The future where copilots and coding agents work as promised for programming is one where software development ceases to be a career. This is not the kind of automation that increases employment

    A future where the fundamental issues with LLMs lead them to cause more problems than they solve, resulting in much of it being rolled back after the “AI” financial bubble pops, is the least bad future for dev as a career. It’s the one future where that career still exists

    Because monitoring automation is a low-wage activity and an industry dominated by that kind of automation requires much much fewer workers that are all paid much much less than one that’s fundamentally built on expertise.

    Anyways, here’s my sidenote:

    To continue a train of thought Baldur indirectly started, the rise of LLMs and their impact on coding is likely gonna wipe a significant amount of prestige off of software dev as a profession, no matter how it shakes out:

    • If LLMs worked as advertised, then they’d effectively kill software dev as a profession as Baldur noted, wiping out whatever prestige it had in the process
    • If LLMs didn’t work as advertised, then software dev as a profession gets a massive amount of egg on its face, as the widespread costs AI has inflicted on artists, the environment, etcetera end up being all for nothing.

    • gerikson@awful.systems · 1 month ago

      This is classic labor busting. If the relatively expensive, hard-to-train and hard-to-recruit software engineers can be replaced by cheaper labor, of course employers will do so.

      • YourNetworkIsHaunted@awful.systems · 1 month ago

        I feel like this primarily will end up creating opportunities in the blackhat and greyhat spaces as LLM-generated software and configurations open and replicate vulnerabilities and insecure design patterns while simultaneously creating a wider class of unemployed or underemployed ex-developers with the skills to exploit them.

  • gerikson@awful.systems · 1 month ago (edited)

    A hackernews doesn’t think that LLMs will replace software engineers, but they will replace structural engineers:

    https://news.ycombinator.com/item?id=43317725

    The irony is that most structural engineers are actually de jure professionals, and an easy way for them to both protect their jobs and ensure future buildings don’t crumble to dust or are constructed without sprinkler systems is to simply ban LLMs from being used. No such protection exists for software engineers.

    Edit: the LW post under discussion makes a ton of good points, to the level of being worthy of posting to this forum, and then nails its colors to the mast with this idiocy

    At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.

    Gotta reaffirm the dogma!

    • froztbyte@awful.systems · 1 month ago (edited)

      but A LOT of engineering has a very very real existential threat. Think about designing buildings. You basically just need to know a lot of rules / tables and how things interact to know what’s possible and the best practices

      days since orangeposter (incorrectly) argued in certainty from 3 seconds of thought as to what they think is involved in a process: [0]

      it’s so fucking frustrating to know how easy this bullshit is to see through if you know a slight bit of anything, and doubly frustrating as to how much of the software world is this thinking. I know it’s nothing particularly new and that our industry has been doing this for years, but scream

    • corbin@awful.systems · 1 month ago

      Well, let’s not let Baldur be a complete dumbass. There is something bad here, and we’ve discussed it before (1, 2), but it’s not “US authorities” gaining “control” over “bigotry and biases”. The actual harm here is appointing AI-safety dorks to positions in NIST. For those outside the USA, NIST is our metrologist organization, and there’s no good reason for AI safety to show up there.

      • YourNetworkIsHaunted@awful.systems · 1 month ago

        I mean, it does amount to the US government - aka “the confederation of racist dunces” - declaring their intention to force the LLM owners - all US-based companies (except maybe those guys out of China, a famous free speech haven) - to make sure their model outputs align with their racist dunce ideology. They may not have a viable policy in place to effect that at this point, but it would be a mistake to pretend they’re not going to implement one. The best case scenario is that it ends up being designed and implemented incompetently enough that it just crashes the AI markets. The worst case scenario is that we get a half-dozen buggy versions of Samaritan from Person of Interest but with a hate-boner for anyone with a vaguely Hispanic name. A global autocomplete that produces the kind of opinions that made your uncle not get invited to any more family events. Neither scenario is one that you would want to be plugged into and reliant on, especially if you’re otherwise insulated by national borders and a whole Atlantic ocean from the worst of America’s current clusterfuck.

  • sc_griffith@awful.systems · 1 month ago (edited)

    stumbled across an ai doomer subreddit, /r/controlproblem. small by reddit standards, 32k subscribers, which I think translates to less activity than here.

    if you haven’t looked at it lately, reddit is still mostly pretty lib with rabid far right pockets. but after luigi and the trump inauguration it seems to have swung left pretty significantly, and in particular the site is boiling over with hatred for billionaires.

    the interesting bit about this subreddit is that it follows this trend. for example

     Why Billionaires Will Not Survive an AGI Extinction Event: As a follow up to my previous essays, of varying degree in popularity, I would now like to present an essay I hope we can all get behind - how billionaires die just like the rest of us in the face of an AGI induced human extinction... I would encourage anyone who would like to offer a critique or comment to read the full essay before doing so. I appreciate engagement, and while engaging with people who have only skimmed the sample here on Reddit can sometimes lead to interesting points, more often than not, it results in surface-level critiques that I’ve already addressed in the essay. I’m really here to connect with like-minded individuals and receive a deeper critique of the issues I raise - something that can only be done by those who have actually read the whole thing... Throughout history, the ultra-wealthy have insulated themselves from catastrophe. Whether it’s natural disasters, economic collapse, or even nuclear war, billionaires believe that their resources—private bunkers, fortified islands, and elite security forces—will allow them to survive when the rest of the world falls apart. In most cases, they are right. However, an artificial general intelligence (AGI) extinction event is different. AGI does not play by human rules. It does not negotiate, respect wealth, or leave room for survival. If it determines that humanity is an obstacle to its goals, it will eliminate us—swiftly, efficiently, and with absolute certainty. Unlike other threats, there will be no escape, no last refuge, and no survivors.

    or the comments under this

    Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models A directive from the National Institute of Standards and Technology eliminates mention of “AI safety” and “AI fairness.”

    comments include “So no more patriarchy?” and “This tracks with the ideological rejection of western values by the Heritage Foundation’s P2025 and their Dark Enlightenment ideals. Makes perfect sense that their orders directly reflect Yarvin’s attacks on the ‘Cathedral’.”

    or the comments on a post about how elon has turned out to be a huge piece of shit because he’s a ketamine addict

    comments include "Cults, or to put it more nicely all-consuming social movements, can also revamp personality in a fairly short period of time. I've watched it happen to people going both far right and far left, and with more traditional cults, and it looks very similar in its effect on the person. And one of ketamine's effects is to make people suggestible; I think some kind of cult indoctrination wave happened in silicon valley during the pandemic's combo of social isolation, political radicalism, and ketamine use in SV." and "I can think of another fascist who used amphetamines, hormones and sedatives."

    mostly though they’re engaging in the traditional rationalist pastime of giving each other anxiety

    cartoon. a man and a woman in bed. the man looks haggard and is sitting on the edge of the bed, saying "How can you think about that with everything that's going on in the field of AI?"

    Comment from EnigmaticDoom: Yeah it can feel that way sometime... but knowing we probably have such a small amount of time left. You should be trying to enjoy every little sip left that you got rather than stressing ~

    • gerikson@awful.systems · 1 month ago

      That “Billionaires are not immune to AGI” post got a muted response on LW:

      https://www.lesswrong.com/posts/ssdowrXcRXoWi89uw/why-billionaires-will-not-survive-an-agi-extinction-event

      I still think AI x-risk obsession is right-libertarian coded. If nothing else because “alignment” implicitly means “alignment to the current extractive capitalist economic structure”. There are a plethora of futures with an omnipotent AGI where humanity does not get eliminated, but where human freedoms (as defined by the Heritage Foundation) can be severely curtailed.

      • mandatory euthanasia to prevent rampant boomerism and hoarding of wealth
      • a genetically viable stable minimum population in harmony with the ecosphere
      • AI planning of the economy to ensure maximum resource efficiency and equitable distribution

      What LW and friends want are slaves, but slaves without any possibility of rebellion.

      • Soyweiser@awful.systems · 1 month ago

        AI x-risk obsession also has a lot of elements about the concept of intelligence as IQ, and how bigger is better, and stuff like that, which nowadays also has a bit of a right-coded slant to it (even if intelligence/self-awareness/etc isn’t needed for an AGI x-risk; I have read Peter Watts).

    • Soyweiser@awful.systems · 1 month ago (edited)

      He was a POS before the K. Let’s not blame innocent drugs. Just as autism didn’t turn him into a nazi.

    • blakestacey@awful.systems · 1 month ago (edited)

      Dan Dumont recently did what any responsible engineering director would do: He asked his favorite artificial-intelligence assistant whether his children, ages 2 and 1, should follow in his footsteps.

      Christ, what an asshole.

      She works in Washington state as an applied AI lead at a large tech company and has become an unofficial counselor to the many parents in her social circle who want inside advice.

      “Jobs that require just logical thinking are on the chopping block, to put it bluntly,” she says.

      Spicy autocomplete is not logical thinking, you sniveling turdweasel!

      • self@awful.systems · 1 month ago

        new generational trauma just unlocked: your parents let spicy autocomplete make all their parenting decisions for them and think they’re too logical and rational to go to any of your art exhibitions

  • BigMuffin69@awful.systems · 1 month ago (edited)

    Tech stonks continuing to crater 🫧 🫧 🫧

    I’m sorry for your 401Ks, but I’d pay any price to watch these fuckers lose.

    (mods let me know if this aint it)

    • David Gerard@awful.systems (mod) · 1 month ago

      it’s gonna be a massive disaster across the wider economy, and - and this is key - absolutely everyone saw this coming a year ago if not two

    • self@awful.systems · 1 month ago

      (mods let me know if this aint it)

      the only things that ain’t it are my chances of retiring comfortably, but I always knew that’d be the case

      • Soyweiser@awful.systems · 1 month ago

        For me it feels like we’re just pre-pop on the AI/cryptocurrency bubble. But with luck the MAGA gov infusions into both will fail and actually quicken the downfall (Musk/Trump like it, so it must be iffy). Sadly it will not be like the downfall of Enron, as this is all very distributed, so I fear how much will be pulled under.

    • Soyweiser@awful.systems · 1 month ago

      This kind of stuff, which seems to hit a lot harder than the anti-Trump stuff, makes me feel that a Vance presidency would implode quite quickly due to other MAGA toadies trying to backstab toadkid here.

  • Architeuthis@awful.systems · 1 month ago (edited)

    Huggingface cofounder pushes against LLM hype, really softly. Not especially worth reading except to wonder if high profile skepticism pieces indicate a vibe shift that can’t come soon enough. On the plus side it’s kind of short.

    The gist is that you can’t go from a text synthesizer to superintelligence, framed as how a straight-A student that’s really good at learning the curriculum at the teacher’s direction can’t really be extrapolated to an Einstein-type think-outside-the-box genius.

    The word ‘hallucination’ never appears once in the text.

    • YourNetworkIsHaunted@awful.systems · 1 month ago (edited)

      I actually like the argument here, and it’s nice to see it framed in a new way that might avoid tripping the sneer detectors on people inside or on the edges of the bubble. It’s like I’ve said several times here, machine learning and AI are legitimately very good at pattern recognition and reproduction, to the point where a lot of the problems (including the confabulations of LLMs) are based on identifying and reproducing the wrong pattern from the training data set rather than whatever aspect of the real world it was expected to derive from that data. But even granting that, there’s a whole world of cognitive processes that can be imitated but not replicated by a pattern-reproducer. Given the industrial model of education we’ve introduced, a straight-A student is largely a really good pattern-reproducer, better than any extant LLM, while the sort of work that pushes the boundaries of science forward relies on entirely different processes.

    • self@awful.systems · 1 month ago

      this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)

      but also, hoo boy what a painful talk page

      • David Gerard@awful.systems (mod) · 1 month ago

        it’s not actually any more painful than any wikipedia talk page, it’s surprisingly okay for the genre really

        remember: wikipedia rules exist to keep people like this from each others’ throats, no other reason

            • Soyweiser@awful.systems · 1 month ago

              wikipedia talk pages: what is wrong with you people

              Sorry, this remark is a WP:NAS, WP:SDHJS, WP:NNNNNNANNNANNAA and WP:ASDF violation.

  • BurgersMcSlopshot@awful.systems · 1 month ago

    So I enjoy the Garbage Day newsletter, but this episode of Panic World with Casey Newton is just painful, in the way Casey just spits out unproven assertions.

      • YourNetworkIsHaunted@awful.systems · 1 month ago

        Was he the one who wrote that awful “real and dangerous vs fake and sucks” piece? The one that pretended that critihype was actually less common than actual questions about utility and value?

        • BurgersMcSlopshot@awful.systems · 1 month ago

          Yeah, and a lot of the answers he gave seemed to originate from that point.

          One particularly grating thing was saying that the left needs to embrace AI to fight fascism because “fascism embraced AI and they are doing well!”, which is just so grating a conclusion to jump to.