Welcome to Codidact Meta!
Codidact Meta is the meta-discussion site for the Codidact community network and the Codidact software. Whether you have bug reports or feature requests, support questions or rule discussions that touch the whole network – this is the site for you.
Probationary shadowban for new accounts
I'd like communities to be able to opt in to a feature that shadowbans new accounts for a brief period of time. Moderators should be able to release accounts from the shadowban manually; accounts should also be released automatically once the time period expires.
The initial duration can be quite short—an hour, maybe?—but ‘suspicious activity’, definition TBD but with some suggestions below, should rapidly increase the duration of the shadowban (say by doubling it for each red flag).
This would help prevent users who have been subject to other sorts of anti-abuse moderation (like STAT) from simply creating new accounts on new IP addresses and continuing their existing patterns of behavior. Across Codidact, we have, to my knowledge, exactly one user who does this, and does it frequently, and I would like it to stop once and for all.
Edit: The value of a short shadowban has been questioned in one answer, and ironically enough the motivation I had for that is in line with a different answer: the idea is to give the suspicious user enough rope to incriminate himself. Post once, and your post appears to the rest of the world in an hour. Post again before that hour is up, with some red flags in the second post, and now the ban is extended, and both posts remain hidden from the world for longer. This is better than a rate limit, because it lets us see the true colors of the suspicious user faster. And it's better than not imposing the shadowban at all initially, because there is no interval during which other users can be distracted by the posts that would be retroactively obliterated if the new user is successfully re-exiled.
Of course this is all predicated on moderators taking an active role in responding to new activity and firmly rejecting ban-evading accounts; nothing I could propose can substitute for that. But if I were such a moderator, I would want a tool like this to make it so that I can keep the community clean without having to watch it 24/7 for sneaky ban evaders.
Suggestions for ‘suspicious activity’:
- Posting multiple questions all at once
- Questions that are unusually short
- Per-community lists of suspicious words—I can be 99.99% certain that any new account in the Mathematics community that posts a question with the words ‘pictorialize’ or ‘soothsay’, for example, is That Guy
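To make the doubling mechanism concrete, here is a minimal Python sketch. Everything in it is illustrative: the class name, the red-flag checks, the thresholds, and the per-community word list are my own assumptions, not part of any Codidact implementation.

```python
from datetime import datetime, timedelta

# All names and thresholds below are hypothetical, for illustration only.
BASE_BAN = timedelta(hours=1)
SUSPICIOUS_WORDS = {"pictorialize", "soothsay"}  # example per-community list


class ProbationaryShadowban:
    def __init__(self, created_at: datetime):
        self.created_at = created_at
        self.red_flags = 0
        self.released_by_moderator = False

    def record_post(self, body: str, recent_post_count: int) -> None:
        # Each red flag doubles the probation window (see expires_at).
        if recent_post_count >= 3:       # burst of questions all at once
            self.red_flags += 1
        if len(body.split()) < 10:       # unusually short question
            self.red_flags += 1
        if SUSPICIOUS_WORDS & set(body.lower().split()):
            self.red_flags += 1

    def expires_at(self) -> datetime:
        # One hour base, doubled once per red flag: 1h, 2h, 4h, 8h, ...
        return self.created_at + BASE_BAN * (2 ** self.red_flags)

    def is_hidden(self, now: datetime) -> bool:
        # Posts stay invisible to others until release or expiry.
        return not self.released_by_moderator and now < self.expires_at()
```

For example, an account created at noon with no red flags surfaces at 1 pm; a second post within that hour containing a suspicious word would push the expiry out to 4 pm, keeping both posts hidden in the meantime.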
Post
If it's possible to characterize the problem as being caused by "That Guy" - i.e., a singular person behind it all - then unless the problem involves a botnet or something, it's small enough that we shouldn't need to come up with new technical solutions just because that problem user exists. To me, that has the flavour of a "bill of attainder", and I'm opposed to it.
Shadowbans are also just generally evil IMO. The first time I saw the term was when the feature was first discovered by the Reddit userbase. This led to massive drama when it turned out that the feature wasn't only being applied to spammers. Almost everyone seemed to agree that this is a terrible way to treat actual human users; modern Internet-powered life is atomizing enough without having community-oriented sites silently connecting the other end of one's line of communication to /dev/null.
Shadow-banning works for spammers because they don't stick around for feedback. Human users, however, will inevitably notice when they never get any answers, comments, or votes - including on Meta (which is readily available to them and not at all hidden or shuffled out of the way). On the other hand, a short one-hour shadow-ban of the sort you describe - I assume the idea is that it's short enough not to be obviously detected in this sort of way - doesn't seem useful.
In the rare cases where someone needs to be targeted, it's better that they are at least aware of the situation. Someone who consistently runs into an automated "Your question has been automatically put on hold, pending review", followed by consistent deletion upon such reviews, will get the message. This also minimizes disruption in the case of a false positive (or in the case that That Guy gets a clue).
As site posting volumes increase, it could be useful to have some stepped system of "lockdown" mechanisms. These could range from varying numbers of flags needed to take some automated action on a post; to having questions default to a closed state and requiring action from curators to approve (i.e. open) them; and many other possibilities which should be discussed separately (the options probably don't even neatly fit on a spectrum).