• Sixty@sh.itjust.works · 7 days ago

    Worthless research.

    That subreddit bans you for accusing others of speaking in bad faith or for using ChatGPT.

    Even if a user called it out, they’d be censored.

    Edit: you know what, it’s unlikely they didn’t read the sidebar. So, worse than worthless. Bad faith disinfo.

    • yesman@lemmy.world · 7 days ago

      “accusing others of speaking in bad faith”

      You’re not allowed to talk about bad faith in a debate forum? I don’t understand. How could that do anything besides shield the sealions, JAQoffs, and grifters?

      And please don’t tell me it’s about “civility”. Bad faith is the civil accusation when the alternative is that your debate partner is a fool.

      • Sixty@sh.itjust.works · 7 days ago

        I won’t tell you about civility, because

        “How could that do anything besides shield the sealions, JAQoffs, and grifters?”

        Not shield, but amplify.

        That’s the point of the subreddit. I’m not defending them if that’s at all how I came across.

        ChatGPT debate threads are plaguing /r/debateanatheist too. Mods are silent on the users asking them to ban this disgusting behavior.

        I didn’t think it’d be a problem so quickly, but the chuds and theists latched onto ChatGPT instantly for use in debate forums.

        • taladar@sh.itjust.works · 7 days ago

          To be fair, LLMs are probably a good match for the gish-gallop style of bad-faith argument that religious people like to use. If all you want is a high number of arguments, it’s probably easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It’s not as if they ever cared whether their arguments were any good.

    • gargolito@lemm.ee · 7 days ago

      Facebook did this over 15 years ago and AFAIK nothing happened to the perpetrators (Cambridge Analytica, IIRC).

    • Zippygutterslug@lemmy.world · 7 days ago

      Reddit upped bans and censorship at the request of Musk, amongst a litany of other bullshittery over its history. It’s as bad as Facebook and Twitter; what little “genuine” conversation is left is just lefties shouting at nazis (in the subreddits and groups where that’s allowed).

  • rooster_butt@lemm.ee · 6 days ago

    CMV: this was good research, akin to white-hat hacking, where the point is to find and expose security exploits. What this research did is point out how easy it is to manipulate people in a “debate” forum that doesn’t allow them to point out bad behavior. If researchers are doing this and publishing it, it’s also being done by nefarious actors who will never disclose it.

  • nthavoc@lemmy.today · 6 days ago

    This is the final straw. I deleted my Reddit account.

    From the article: Chief Legal Officer Ben Lee responded to the controversy on Monday, writing that the researchers’ actions were “deeply wrong on both a moral and legal level” and a violation of Reddit’s site-wide rules.

    I don’t believe for one moment that Reddit admins didn’t know this was going on, especially since it involved mining user data to feed AI. Since when does Reddit have a moral compass? Their compass always points north to maximum shareholder value, which right now looks like anything to do with AI.

  • throwawayacc0430@sh.itjust.works · 6 days ago

    According to the subreddit’s moderators, the AI took on numerous different identities in comments during the course of the experiment, including a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.”

    You don’t need an LLM for that; you’ve got Dean Browning with his xitter alts.

  • BossDj@lemm.ee · 7 days ago

    What they should do is convince a smaller subsection of Reddit users to break off to a new site, maybe entice them with promises of a FOSS platform. Maybe a handful of real people and all the rest LLM bots. They’ll never know.

  • Neuromorph@lemm.ee · 7 days ago

    Good. I spent at least the last 3 years on Reddit making asinine comments, phrases, and punctuation to throw off any AI botS.

  • Dr. Bob@lemmy.ca · 7 days ago

    With all the bots on the site, why complain about these ones?

    Edit: auto$#&"$correct

  • Cocopanda@futurology.today · 7 days ago

      I mean, Reddit was botted to death after the Jon Stewart event in DC. People and corporations realized how powerful Reddit was. Sucks that the site didn’t try to stop it. Now AI just makes it easier.

      • glitchdx@lemmy.world · 6 days ago

        I don’t think Lemmy is big enough to be “next”, but this is still a valid concern.

      • ProdigalFrog@slrpnk.net · 6 days ago

        At least here we have Fediseer to vet instances, and the ability to vet each sign-up.

        I think eventually, when we’re more of a target, we’ll have to circle the wagons, so to speak, and limit communications to only the more carefully moderated instances that root out the bots.

  • TootSweet@lemmy.world · 7 days ago

    Reddit: “Nobody gets to secretly experiment on Reddit users with AI-generated comments but us!”

    • Zenoctate@lemmy.world · 6 days ago

      They literally have some AI thing called “answers”, which is Reddit’s own shitty practice of pushing AI.