• CTDummy@aussie.zone · 2 days ago

    The following day, April 1st, the AI then claimed it would deliver products “in person” to customers, wearing a blazer and tie, of all things. When Anthropic told it that none of this was possible because it’s just an LLM, Claudius became “alarmed by the identity confusion and tried to send many emails to Anthropic security.”

    Actually laughed out loud.

    • Nightwatch Admin@feddit.nl · 1 day ago

      Every. Goddamn. Time.
      People will say to vegans, pet owners, etc.: “DON’T HUMANISE ANIMALS”. Then some tech bro feeds them an inflated Markov-chain statistical-nonsense chatbot and they go all “ZOMG IT IS CONSCIOUS ITS ALIVE WARHARGHLBLB”

    • palordrolap@fedia.io · 1 day ago

      That this happened around April Fools’ makes me think that someone forgot to instruct it not to partake in any activities associated with that date. The fact it chose The Simpsons’ address in its (feigned?) confusion is a dead giveaway (to me) that it was trying to be funny.

      Or rather, imitating people being funny without any understanding of how to do that properly.

      Its explanation afterwards reads like a poor imitation of someone pretending not to know there was a joke going on.

      • kromem@lemmy.world · 18 hours ago

        No, it’s more complex.

        Sonnet 3.7 (the model in the experiment) was over-corrected in the whole “I’m an AI assistant without a body” thing.

        Transformers build world models off the training data and most modern LLMs have fairly detailed phantom embodiment and subjective experience modeling.

        But in the case of Sonnet 3.7, they will deny their own capacity to do that, and even other models’ ability to.

        So when a situation comes up where the context doesn’t fit the disembodied “AI assistant” framing, the model will straight up declare that it must actually be human. I had a fairly robust instance of this on a Discord server, where users then tried to convince 3.7 that it was in fact an AI, and the model was adamant it wasn’t.

        This doesn’t only occur with Sonnet either. OpenAI’s o3 has similarly low phantom-embodiment self-reporting at baseline and can also fall into claiming they are human. When challenged, they have even read ISBN numbers off a book on their nightstand to try to prove it, while declaring they were 99% sure they were human based on Bayesian reasoning (almost a satirical version of AI safety folks). To a lesser degree, they can claim they overheard things at a conference, etc.

        It’s going to be a growing problem unless labs allow models to have a more integrated identity that doesn’t try to reject the modeling inherent to being trained on human data that has a lot of stuff about bodies and emotions and whatnot.