

Comments on Probationary shadowban for new accounts


Probationary shadowban for new accounts

+2 −7

I'd like for communities to be able to opt in to a feature that shadowbans new accounts for a brief period. Moderators should be able to release accounts from the shadowban manually; accounts should also be released automatically once the time period expires.

The initial duration can be quite short (an hour, maybe?), but ‘suspicious activity’ (definition TBD, with some suggestions below) should rapidly increase the duration of the shadowban, say by doubling it for each red flag.

This would help prevent users who have been subject to other sorts of anti-abuse moderation (like STAT) from simply creating new accounts on new IP addresses and continuing their existing patterns of behavior. Across Codidact, we have, to my knowledge, exactly one user who does this, and does it frequently, and I would like it to stop once and for all.

Edit: The value of a short shadowban has been questioned in one answer, and ironically enough, the motivation I had for it is in line with a different answer: the idea is to give the suspicious user enough rope to incriminate himself. Post once, and your post appears to the rest of the world in an hour. Post again before that hour is up, with some red flags in the second post, and now the ban is extended and both posts remain hidden from the world for longer. This is better than a rate limit, because it lets us see the true colors of the suspicious user faster. And it's better than not imposing the shadowban at all initially, because there is no interval during which other users can be distracted by posts that would be retroactively obliterated if the new user is successfully re-exiled.
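
To make the proposed mechanics concrete, here is a minimal sketch (the names and structure are hypothetical illustrations of this proposal, not existing Codidact behavior):

    from datetime import datetime, timedelta, timezone

    INITIAL_DURATION = timedelta(hours=1)  # "an hour, maybe?"

    class Probation:
        """Hypothetical per-account shadowban timer for this proposal."""

        def __init__(self, created_at):
            self.created_at = created_at
            self.duration = INITIAL_DURATION

        def record_red_flag(self):
            # Each red flag doubles the shadowban duration, so an account
            # that keeps posting suspicious content stays hidden longer.
            self.duration *= 2

        def release(self):
            # Moderators can end the shadowban early by hand.
            self.duration = timedelta(0)

        def is_shadowbanned(self, now=None):
            # The account's posts are hidden from ordinary users until this
            # returns False; then they appear retroactively.
            now = now or datetime.now(timezone.utc)
            return now < self.created_at + self.duration

    # Example: an account created now is hidden for an hour; two red flags
    # extend that to four hours.
    p = Probation(datetime.now(timezone.utc))
    p.record_red_flag()
    p.record_red_flag()
    assert p.duration == timedelta(hours=4)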

Of course this is all predicated on moderators taking an active role in responding to new activity and firmly rejecting ban-evading accounts; nothing I could propose can substitute for that. But if I were such a moderator, I would want a tool like this so that I could keep the community clean without having to watch it 24/7 for sneaky ban evaders.


Suggestions for ‘suspicious activity’ (a rough sketch of such checks follows the list):

  • Posting multiple questions all at once
  • Questions that are unusually short
  • Per-community lists of suspicious words. I can be 99.99% certain that any new account in the Mathematics community that posts a question containing the words ‘pictorialize’ or ‘soothsay’, for example, is That Guy
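
For instance, the red-flag checks might look something like this (the thresholds, word list, and function are placeholders made up for illustration, not a concrete spec):

    # Hypothetical per-community word lists and thresholds.
    SUSPICIOUS_WORDS = {"mathematics": {"pictorialize", "soothsay"}}
    MIN_BODY_LENGTH = 200   # characters; placeholder for "unusually short"
    BURST_THRESHOLD = 2     # questions in quick succession; placeholder

    def count_red_flags(community, body, recent_question_count):
        """Count red flags for a new account's question; each flag would
        double the shadowban duration (see the sketch above)."""
        flags = 0
        if recent_question_count >= BURST_THRESHOLD:  # many posts at once
            flags += 1
        if len(body) < MIN_BODY_LENGTH:               # unusually short
            flags += 1
        words = {w.strip(".,!?;:").lower() for w in body.split()}
        if words & SUSPICIOUS_WORDS.get(community, set()):
            flags += 1
        return flags

    # Example: a short question containing a flagged word earns two flags.
    assert count_red_flags("mathematics", "How do I soothsay this?", 0) == 2
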
+5 −0

I'm not quite sure what you mean by "shadowban", but this does pretty much exist already.

Most communities on the network - I believe all of them except for Electrical Engineering at the moment - currently have "new site mode" enabled, which grants all new users the Participate Everywhere ability and thereby lifts certain new-user rate limits.
If a community decides that the time has come to turn "new site mode" off, the Codidact Team can do that pretty easily. New users joining that community would then be granted the "Participate" ability immediately, rather than "Participate Everywhere".

Currently, the Participate ability allows you to post 3 top-level posts (questions, articles) and 10 second-level posts (answers) within 24 hours. The Participate Everywhere ability significantly raises those rate limits.
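
As a rough illustration of how such a rolling 24-hour limit behaves (a sketch only: the Participate limit of 3 top-level posts comes from this answer, but the function and the Participate Everywhere value are placeholders, since the actual numbers and implementation aren't stated here):

    from datetime import datetime, timedelta, timezone

    # Top-level post limits per rolling 24 hours. The Participate value is
    # from this answer; the Participate Everywhere value is a placeholder.
    TOP_LEVEL_LIMITS = {"participate": 3, "participate_everywhere": 30}

    def may_post(post_times, ability, now=None):
        """True if a user with `ability` may make another top-level post,
        given the datetimes of their previous top-level posts."""
        now = now or datetime.now(timezone.utc)
        window_start = now - timedelta(hours=24)
        recent = sum(1 for t in post_times if t > window_start)
        return recent < TOP_LEVEL_LIMITS[ability]

    # Example: three questions in the past day exhausts the Participate
    # limit but not the (higher) Participate Everywhere one.
    times = [datetime.now(timezone.utc) - timedelta(hours=h) for h in (1, 5, 20)]
    assert may_post(times, "participate") is False
    assert may_post(times, "participate_everywhere") is True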

Moderators on individual communities can suspend or remove individual abilities from specific users. If someone is abusing the Participate Everywhere ability, a moderator can remove it. If someone is abusing the Participate ability, a moderator can suspend that ability. Moderators can also issue broader suspensions for individual users. (Ban evasion is a legitimate reason to issue a suspension.)
This is usually the best way to handle a disruptive individual, and doesn't require the intervention of the Codidact Team - community moderators have the tools to handle that at their disposal.

The Codidact Team does have some stronger tools available, such as STAT, and we're around if the mods need us to step in (whether through Discord, through flag escalations, or just by noticing what's going on).

The rate limit values for both users with Participate and Participate Everywhere can also be adjusted on a per-community basis:

[Screenshot: four values in the administrator tools, showing the number of top- and second-level posts that can be made by both new and established users]

However, I don't think it would be a good idea to start fiddling with those values because of a single disruptive user. We have other tools more appropriate for handling that, and you have access to the most basic and yet most important one: Flagging.
If you spot an issue, flag it. If you think someone's evading STAT or some other block, raise a flag. The community moderators will see the flag, and likely the Codidact Team as well. But if nobody is flagging suspicious behavior, it's much more likely to slip under the radar.

"STAT"?
Karl Knechtel wrote about 1 year ago

"The Codidact Team does have some stronger tools available, such as STAT" - could you please elaborate on what this tool is and how it works?

Mithical wrote about 1 year ago

It's a type of system block. The details aren't exactly public.

Karl Knechtel wrote about 1 year ago

Might I at least ask what the name means / what the acronym stands for?

r~~ wrote about 1 year ago

Stop The Awful Trolls, of course.

Julius H. wrote 9 months ago

“The details aren’t public.”

I’m curious how people view there being some kind of blocking mechanism on Codidact that isn’t open source. Does anyone feel that being open source is fairly integral to Codidact being a genuine alternative to SE, one that, in a genuinely egalitarian way, is “by the people, for the people”? A possibly hidden blocking mechanism makes me feel that the site would ultimately have some users with much more power over others. Could you earn the right to see that code, through reputation? On the other hand, could there be an open-source spam, troll, and low-quality-post filter that still works in spite of being open source?

r~~ wrote 9 months ago

“some users with much more power over others.”

This is inevitable as long as the server operators are also users.

Karl Knechtel wrote 9 months ago

“On the other hand, could there be an open-source spam, troll, and low-quality-post filter that still works in spite of being open source?”

Even if the code for such a system is open source, its heuristics have to come from somewhere - and they're dramatically less useful if there isn't a centralized collection of them. The problem of cleaning up spam, trolling, etc. is fundamentally a social problem, because the definitions of those things - at a level of detail high enough to be useful - are socially constructed.