Why isn’t there anything in place to protect the chatroom from users who engage in illegal or inappropriate subjects? I understand concerns about censorship, but I believe there’s a fine line between people who keep talking about these things and their role in creating or engaging in such activity. Every day, almost hourly, there’s talk/roleplay/jokes about CP and racism. I’d say this is far from normal behavior. Are these users traceable should anything ever happen? Could child abuse resources be made publicly available on the Perchance website?
I’m assuming that you’re talking about this as a non-generator-author. If you are an author, you can ban people (see the comments plugin page).
If you are not the author of the generator, it should ideally only take a few people in the chat to report them and they’ll be banned (for an amount of time that depends on severity, ‘recidivism’, etc.).
That said, improving the comments plugin moderation system is pretty high on my todo list. Probably not in the next few weeks, but ideally within the next couple of months. I think it can be significantly improved by rewriting some of the core detection logic and upgrading the AI model that it uses.
I’m talking about this page specifically https://perchance.org/ai-text-to-image-generator
There are a lot of users there, and a lot of inappropriate content being generated and discussed in the chatroom… Whoever the author of that page is should probably pay attention to that kind of stuff, I think.
well damn I guess we just eyeroll here whenever someone raises a flag about inappropriate content or hostile users. Let me go touch grass.
I don’t get the joke. Why is bringing this up a bad thing? What do you think new users are thinking when they stumble across a site with a gallery flooded with “Keep Perchance a secret,” questionable-looking content in between, and a chatroom box with people talking about CP, making racist remarks, and leaving hostile comments?
Eyeroll? Probably not, mate. They’d probably look for support resources to report this stuff, which is what I thought I was doing. I don’t know how effective the report button is on comments and content; I’ve reported quite a bit since I started this thread. I see people in the comments asking who the owner of the generator is, asking about moderators, and a few reinforcing the use of the report and block buttons, so I know I’m not alone in feeling the way I do. It’s probably no coincidence that I saw RudBo advertising their generator in the chatroom maybe two minutes after CP was being discussed there. So there’s really no reason for you to be eyerolling when you’ve seen the same content being made and discussed. You do you, I guess… 🙄

I’m new to Perchance, so I didn’t know how unhinged this community can be, but I’m sorry, I’m not here to joke around about this kind of stuff. I already had to deal with this growing up; I don’t need to see people fantasize about it on a site not explicitly advertised for that, and I don’t need people rolling their eyes at me.
It’s also probably no coincidence that after I brought this up, things seem to have been tweaked on Perchance. It looks like NSFW content only appears after a certain number of adult images have been generated, so right off the bat you’re not greeted with suspicious content, though I do see the odd one get through the filter. The chatroom is still pretty toxic regardless. Once the NSFW content filter is unlocked, the amount and type of questionable content still looks about the same.
To be honest, I don’t think I’m going to keep up with this thread, so I won’t be responding anymore. I made this Lemmy account specifically to ask about this issue, so I won’t delete it in case someone else is concerned. Can’t say this is the place for me, though. You guys keep those eyes rolling or whatever. All the best.
🙄
🙄
🙄
I noticed you outright block VPNs in the chatrooms; that certainly helps. Systems I’ve seen that rely on voting can certainly be abused too, of course. I’m happy to hear that there are already volunteer mods here, and yes, I was speaking from experience in the community-management world, not as an author.
Thank you for your great project. I didn’t mean to come off as overly pessimistic. Some scars… do not heal…
I was a volunteer moderator on a pretty popular game community for about 6 years, where the active userbase sometimes exceeded 10,000 online at any one time. We had plenty of 8ch garbage people, coordinated spamming from Nazi IRC channels, Trump assholes just being Trump assholes, and sketchy CP pervs skulking around.
I am as annoyed by moderation as most people are, but people aren’t going to moderate their own chatrooms and communities reliably, if at all.
If you’re thinking of a scripted or AI moderation system, good luck, though who knows. Maybe AI moderators will work in the near future, though they’d be pretty fun targets to troll. Word bans, blacklists, and offensive-syntax filters are a nightmare to maintain, yet that’s how channels have been moderated for the past 30 years.
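A toy sketch of why plain wordlist filters are such a maintenance nightmare (the banned words here are made-up placeholders, not anything Perchance actually uses). Substring matching fails in both directions: it flags innocent words that happen to contain a banned term (the classic “Scunthorpe problem”), and it is trivially evaded with spacing or character substitution.

```python
# Naive substring-based word filter, assuming a hypothetical banned list.
BANNED = {"scam", "spam"}

def is_flagged(message: str) -> bool:
    """Return True if any banned term appears as a substring."""
    text = message.lower()
    return any(word in text for word in BANNED)

# False positive: an innocent word contains a banned substring.
print(is_flagged("I love scampi"))    # True, but harmless

# False negative: trivial spacing defeats the substring check.
print(is_flagged("s p a m offer"))    # False, but abusive
```

Every patch for one evasion (strip spaces, normalize leetspeak) tends to create new false positives, which is why these lists balloon over time and still leak.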
The sad truth is that safe places require human moderators who care about the community and spend time protecting it. It’s a job nobody wants to do, and burnout is a bitch. What would work: a few trustworthy people with a lot of history with Perchance and the community who could be online, set up so that users could ping them with reports of abuse/scams/CP, and who would have the power to intervene on anything opened to the public.
That’s the only way. Spending time trying to code a way around this is going to suck your brains out.
We have that!!!
It’s hard to show how effective chatroom moderation is, but this is the raw gallery.