Probationary shadowban for new accounts
I'd like for communities to be able to opt in to a feature that shadowbans new accounts for a brief period of time. Moderators should be able to release accounts from the shadowban manually; they should also be released from the shadowban after the time period expires.
The initial duration can be quite short—an hour, maybe?—but ‘suspicious activity’, definition TBD but with some suggestions below, should rapidly increase the duration of the shadowban (say by doubling it for each red flag).
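To make the mechanics concrete, here's a minimal sketch of the timing logic, assuming the one-hour base and the doubling rule above; every name in it is made up for illustration, and nothing here exists in the Codidact codebase:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch only: none of these names exist in Codidact,
# and the one-hour base duration is just the suggestion above.
BASE_DURATION = timedelta(hours=1)

@dataclass
class ShadowbanState:
    started_at: datetime
    red_flags: int = 0
    released_by_moderator: bool = False  # mods can lift the ban early

    @property
    def duration(self) -> timedelta:
        # Each red flag doubles the probation period: 1h, 2h, 4h, ...
        return BASE_DURATION * (2 ** self.red_flags)

    def is_active(self, now: datetime) -> bool:
        if self.released_by_moderator:
            return False
        return now < self.started_at + self.duration
```

So an account that trips two red flags during its first hour stays hidden for four hours in total, without a moderator needing to be online.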
This would help prevent users who have been subject to other sorts of anti-abuse moderation (like STAT) from simply creating new accounts on new IP addresses and continuing their existing patterns of behavior. Across Codidact, we have, to my knowledge, exactly one user who does this, and does it frequently, and I would like it to stop once and for all.
Edit: The value of a short shadowban has been questioned in one answer, and ironically enough the motivation I had for that is in line with a different answer: the idea is to give the suspicious user enough rope to incriminate himself. Post once, and your post appears to the rest of the world in an hour. Post again before that hour is up, with some red flags in the second post, and now the ban is extended, and both posts remain hidden from the world for longer. This is better than a rate limit, because it lets us see the true colors of the suspicious user faster. And it's better than not imposing the shadowban at all initially, because there is no interval during which other users can be distracted by the posts that would be retroactively obliterated if the new user is successfully re-exiled.
Of course this is all predicated on moderators taking an active role in responding to new activity and firmly rejecting ban-evading accounts; nothing I could propose can substitute for that. But if I were such a moderator, I would want a tool like this to make it so that I can keep the community clean without having to watch it 24/7 for sneaky ban evaders.
Suggestions for ‘suspicious activity’ (sketched in code after this list):
- Posting multiple questions all at once
- Questions that are unusually short
- Per-community lists of suspicious words—I can be 99.99% certain that any new account in the Mathematics community that posts a question with the words ‘pictorialize’ or ‘soothsay’, for example, is That Guy
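To make these heuristics concrete, here's a rough sketch of how the red-flag checks might combine; the thresholds, window, and word list below are all invented for illustration, and the real definitions would be per-community decisions:

```python
import re

# All thresholds and names here are made-up illustrations, not settings
# that exist anywhere in Codidact.
SUSPICIOUS_WORDS = {"pictorialize", "soothsay"}  # example per-community word list
MIN_BODY_LENGTH = 200        # characters; an assumed "unusually short" cutoff
BURST_QUESTION_LIMIT = 2     # assumed cap on questions posted "all at once"

def count_red_flags(post_body: str, questions_in_last_hour: int) -> int:
    """Count red flags for a new post; each one doubles the shadowban."""
    flags = 0
    if questions_in_last_hour > BURST_QUESTION_LIMIT:
        flags += 1  # multiple questions all at once
    if len(post_body.strip()) < MIN_BODY_LENGTH:
        flags += 1  # unusually short question
    words = set(re.findall(r"[a-z']+", post_body.lower()))
    if words & SUSPICIOUS_WORDS:
        flags += 1  # hit on the per-community suspicious-word list
    return flags
```

Under the doubling rule above, a post that trips all three checks would stretch a one-hour shadowban to eight hours.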
4 answers
I'm not quite sure what you mean by "shadowban", but this does pretty much exist already.
Most communities on the network - I believe all of them except for Electrical Engineering at the moment - currently have "new site mode" enabled, which grants all new users the Participate Everywhere ability, removing certain new-user rate limits.
If a community decides that the time has come to turn "new site mode" off, the Codidact Team can do that pretty easily. New users would then be granted the "Participate" ability immediately upon joining the community, rather than "Participate Everywhere".
Currently, the Participate ability allows you to post 3 top-level posts (questions, articles) and 10 second-level posts (answers) within 24 hours. The Participate Everywhere ability significantly raises those rate limits.
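In rough pseudo-code terms, the check works like this; the Participate numbers are the ones just stated, while the Participate Everywhere numbers are placeholders rather than the actual configured values:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the per-ability rate-limit check described above. Only the
# Participate limits come from this answer; the rest is illustrative.
LIMITS = {
    "participate":            {"top_level": 3,  "second_level": 10},
    "participate_everywhere": {"top_level": 10, "second_level": 40},  # placeholders
}

def may_post(ability: str, kind: str, recent_post_times: list[datetime]) -> bool:
    """True if a user with `ability` may create another post of `kind` now."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
    posts_in_window = sum(1 for t in recent_post_times if t >= cutoff)
    return posts_in_window < LIMITS[ability][kind]
```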
Moderators on individual communities can suspend or remove individual abilities from specific users. If someone is abusing the Participate Everywhere ability, a moderator can remove it. If someone is abusing the Participate ability, a moderator can suspend that ability. Moderators can also issue broader suspensions for individual users. (Ban evasion is a legitimate reason to issue a suspension.)
This is usually the best way to handle a disruptive individual, and doesn't require the intervention of the Codidact Team - community moderators have the tools to handle that at their disposal.
The Codidact Team does have some stronger tools available, such as STAT, and we're around if the mods need us to step in (such as through Discord, through flag escalations, or just noticing what's going on).
The rate limit values for both the Participate and Participate Everywhere abilities can also be adjusted on a per-community basis.
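As a purely hypothetical shape for what such per-community adjustment amounts to (the real values live in admin settings, and every name and number here is invented):

```python
# Illustrative only: Codidact stores these in per-community admin settings,
# not in code, and these numbers are made up.
DEFAULT_LIMITS = {"participate": {"top_level": 3, "second_level": 10}}

COMMUNITY_OVERRIDES = {
    "some-community": {"participate": {"top_level": 5, "second_level": 15}},
}

def limits_for(community: str, ability: str) -> dict:
    """Merge a community's overrides over the network-wide defaults."""
    override = COMMUNITY_OVERRIDES.get(community, {}).get(ability, {})
    return {**DEFAULT_LIMITS[ability], **override}
```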
However, I don't think it would be a good idea to start fiddling with those values because of a single disruptive user. We have other tools more appropriate for handling that, and you have access to the most basic and yet most important one: Flagging.
If you spot an issue, flag it. If you think someone's evading STAT or some other block, raise a flag. The community moderators will see the flag, and likely the Codidact Team as well. But if nobody is flagging suspicious behavior, it's much more likely to slip under the radar.
Jon Ericson takes the opposite position: find the miscreants as quickly as possible.
My thinking has been heavily influenced by this philosophy of moderation written by the late Shamus Young. A key quote:
> Instead of making rules to compel crazies to behave – which can become a full-time enforcement project – I allow them to act out. And then I ban them. I want to know who the crazy people are, as fast as possible. The sooner they reveal their character, the sooner I can pull them out of the pool before they make a mess. This isn’t hard. Problem People are usually easy to spot.
Another way to put it is that moderators have the site rules/guidelines as tools designed to set a standard of behavior. But if people are breaking the community without breaking the rules, we don't need to just throw up our hands. I strongly suspect a few people (maybe as few as one) cause most of the problems on most sites. I might have a hard time sorting things out because there are a lot of accusations being thrown around. Just because someone breaks the rules sometimes doesn't mean they are incapable of being decent members of the community.
I understand that the problem you describe is that it's annoying to repeatedly find the same person again and again, but I submit that it's still better to find them unambiguously than it is to rules-lawyer them into misbehaving in a way that's harder to detect.
If it's possible to characterize the problem as being caused by "That Guy" - i.e., a singular person behind it all - then unless the problem involves a botnet or something, it's small enough that we shouldn't need to come up with new technical solutions just because that problem user exists. To me, that has the flavour of a "bill of attainder", and I'm opposed to it.
Shadowbans are also just generally evil IMO. The first time I saw the term was when the feature was first discovered by the Reddit userbase. This led to massive drama when it turned out that the feature wasn't only being applied to spammers. Almost everyone seemed to agree that this is a terrible way to treat actual human users; modern Internet-powered life is atomizing enough without having community-oriented sites silently connecting the other end of one's line of communication to /dev/null.
The reason shadow-banning works for spammers is that they don't stick around for feedback. Human users, however, will inevitably notice when they don't get any answers or comments or votes - ever, including on Meta (which is readily available to them and not at all hidden or shuffled out of the way). On the other hand, a short one-hour shadow-ban of the sort you describe - I assume the idea is that it's short enough not to be obviously detected in this way - doesn't seem useful.
In the rare cases where someone needs to be targeted, it's better that they are at least aware of the situation. Someone who consistently runs into an automated "Your question has been automatically put on hold, pending review", followed by consistent deletion upon such reviews, will get the message. This also minimizes disruption in the case of a false positive (or in the case that That Guy gets a clue).
As site posting volumes increase, it could be useful to have some stepped system of "lockdown" mechanisms. These could range from varying numbers of flags needed to take some automated action on a post; to having questions default to a closed state and requiring action from curators to approve (i.e. open) them; and many other possibilities which should be discussed separately (the options probably don't even neatly fit on a spectrum).
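As a purely speculative sketch of what such stepped levels could look like (every level name and threshold here is invented; the actual mechanisms would need their own discussion):

```python
from enum import Enum

# Speculative illustration only; none of these levels or numbers are
# concrete proposals, just a shape for the discussion.
class LockdownLevel(Enum):
    NORMAL = 0          # posts go live immediately
    FLAG_SENSITIVE = 1  # fewer flags needed to auto-hide a post
    HOLD_NEW = 2        # new questions start closed until a curator opens them

FLAGS_TO_AUTO_HIDE = {
    LockdownLevel.NORMAL: 5,
    LockdownLevel.FLAG_SENSITIVE: 2,
    LockdownLevel.HOLD_NEW: 1,
}

def question_starts_open(level: LockdownLevel) -> bool:
    """Under HOLD_NEW, curators must explicitly open each new question."""
    return level is not LockdownLevel.HOLD_NEW
```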
I believe people should be banned only if they did something wrong. They should be banned, and then should know that they are banned, why, and for how long.
People who have not done anything wrong yet should not be banned (or shadowbanned) at all if there is no pressing reason to do so. Unless Codidact receives a sudden spam wave from newly registered accounts that cannot be filtered in any other way, such as by applying IP-based blocks to the locations from which large volumes of spam originate, shadowbanning all new accounts unnecessarily degrades the experience for new users by giving their posts zero engagement, for seemingly no reason, until the shadowban expires.
I say no, unless a pressing need emerges, in which case it should be a last resort.