Why isn’t there anything in place to protect the chatroom from users who engage in illegal or inappropriate subjects? I understand the concerns around censorship, but I believe there’s a fine line between those who merely keep talking about such things and those who create or engage in such activity. Every day, almost hourly, there’s talk/roleplay/jokes about CP and racism. I’d say this is far from normal behavior. Are these users traceable should anything ever happen? Could child abuse resources be made publicly available on the Perchance website?
I’m assuming that you’re talking about this as a non-generator-author. If you are the author, you can ban people (see the comments plugin page).
If you are not the author of the generator, it should ideally only take a few people in the chat reporting them for them to be banned (for an amount of time that depends on severity, ‘recidivism’, etc.).
That said, improving the comments plugin moderation system is pretty high on my todo list. Probably not in the next few weeks, but ideally within the next couple of months. I think it can be significantly improved by rewriting some of the core detection logic and upgrading the AI model that it uses.
I noticed you outright block VPNs in the chatrooms. That certainly helps. Systems that rely on voting can certainly be abused too, of course. I’m happy to hear that there are already volunteer mods on here, and yes, I was speaking from experience in the community management world, not as an author.
Thank you for your great project. I didn’t mean to come off as overly pessimistic. Some scars… do not heal…
I’m talking about this page specifically https://perchance.org/ai-text-to-image-generator
There’s a high number of users there, a large amount of inappropriate content being generated, and similar engagement within the chatroom… Whoever the author of that page is should probably pay attention to that kind of stuff, I think.
🙄
I was a volunteer moderator on a pretty popular game community for about 6 years, where the active userbase sometimes exceeded 10,000 online at any one time. We had plenty of 8ch garbage people, coordinated spamming from nazi IRC channels, Trump assholes just being Trump assholes, and sketchy CP pervs skulking around.
I am as annoyed by moderation as most people are, but people aren’t going to moderate their own chatrooms and communities reliably, if at all.
If you’re thinking of a scripted or AI moderation system, good luck, though who knows. AI moderators might work in the near future, though they’d be pretty fun targets to troll. Word bans, blacklists, and offensive-syntax filters are a nightmare to maintain, yet that’s the way channels have been moderated for the past 30 years.
The sad truth is that safe places require human moderators who care about the community to spend time protecting it. It’s a job nobody wants to do, and burnout is a bitch. If there were a few trustworthy people with a long history in the Perchance community who could be online, you could set it up so that users just ping them with reports of abuse/scams/CP, and they’d have the power to intervene in anything that was opened to the public.
That’s the only way. Spending time trying to code a way around this is going to suck your brains out.
We have that!!!
It’s hard to show how effective chatroom moderation is, but this is the raw gallery.