
Comments on Scoring System for Trust Level Requirements

Question

Scoring System for Trust Level Requirements

+14
−1

Currently, we're planning to implement a system for user privileges based on Trust Levels.

These take the form 'if you satisfy [these requirements], you get [these perks]', where the requirements are generally things like "at least 50 accepted edits".

Continuing this example, what such a requirement doesn't take into account is the number of rejected edits: if a user has 50 accepted edits out of a total of 200 suggested edits (i.e. has 150 rejected edits), then I for one would be hesitant to give that user the ability to edit directly.

At this point, it appears the solution would be either to come up with some method of determining how many accepted edits are needed to 'balance out' the rejected edits, or to have a system of 'more than x accepted edits and fewer than y rejected edits within the past [some time-scale]'. However, we already have a system that estimates the probability of a successful outcome of a binary choice (e.g. 'accept' vs. 'reject') given some data: our post scoring system.

I'm therefore proposing that we use this same scoring system to 'score' each individual requirement in the Trust Levels: `(accepts + N) / (accepts + rejects + 2N)`, with N = 2. The requirement of 'at least 50 accepted edits' could then be replaced with a requirement of 'an edit-score of at least 0.95'. The same approach could be applied to create a user post-score, where 'accepts' is the total number of upvotes across all posts and 'rejects' is the total number of downvotes.
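
To make that concrete, here's a minimal sketch of the score (the function name is mine, not anything in the Codidact codebase), applied to the example above:

```python
N = 2  # smoothing constant, as in the post scoring system

def requirement_score(accepts: int, rejects: int, n: int = N) -> float:
    """Smoothed success rate: (accepts + n) / (accepts + rejects + 2n)."""
    return (accepts + n) / (accepts + rejects + 2 * n)

# 50 accepted edits, none rejected: (50 + 2) / (50 + 4) ~= 0.963 -> passes 0.95.
print(requirement_score(50, 0))
# 50 accepted out of 200 suggested (150 rejected): 52 / 204 ~= 0.255 -> fails.
print(requirement_score(50, 150))
```

Note that the 0.95 threshold roughly reproduces the old '50 accepted edits' bar for a user with no rejections, while the editor with 150 rejections from the example above lands far below it.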

As we're planning on getting rid of rep and not replacing it with any number (other than trust levels), an individual user's score should perhaps only be visible to that user. For easy visualisation, it could also be displayed in a radar chart such as the one below:

[Image: radar chart for user scores]
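
For illustration, a chart like that could be drawn with a few lines of matplotlib (the requirement names and scores here are invented for the example):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-requirement scores for one user.
labels = ["Edits", "Flags", "Posts", "Votes", "Close reviews"]
scores = [0.95, 0.80, 0.70, 0.90, 0.60]

# One axis per requirement, evenly spaced around the circle.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
closed_scores = scores + scores[:1]  # repeat first point to close the polygon
closed_angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(closed_angles, closed_scores)
ax.fill(closed_angles, closed_scores, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)  # scores are probabilities, so cap the radius at 1
plt.show()
```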


Answer
+1
−1
> I'm therefore proposing that we use this same scoring system to 'score' each individual requirement in the Trust Levels: `(accepts + N) / (accepts + rejects + 2N)`, with N = 2.

This is good enough for measuring the success of some specific activity. However, trust should be earned with a user's broader participation in mind as well.

There are really two classes of privileges: the merely mechanical ones, and those that exercise some level of policy. For example, editing is a mechanical privilege, whereas opening/closing questions is a policy privilege.

It might be OK to be allowed to edit posts without review by demonstrating a good edit history alone. However, I wouldn't want someone opening/closing questions without having shown broader site participation. To exercise policy, one needs to really understand the site. That can't be done just by watching, or by having completed a particular task successfully a few times. You really want a measure of being invested in the site. I don't want someone making policy decisions who doesn't have skin in the game, so to speak.

You keep saying you don't like rep, but some measure of having provided widely accepted value is useful for lots of reasons. I won't go into the others here, but this should be one of the factors in allowing policy privileges.

The open/close privilege is a good example. I would say that successfully answering lots of questions is necessary for deciding whether a particular question should be allowed. Actually answering questions gives you a different perspective than someone merely viewing them as a bystander. Bystanders shouldn't be allowed to make policy decisions.

So to answer your question: your formula could be OK for some types of privileges, but not others. For those other privileges, there still needs to be an overall measure of having provided value and being active on the site.

About your specific formula: it's probably effective enough for the mechanical privileges, but it's rather unintuitive and makes it difficult to provide easy per-site controls. Ease and clarity of the controls are also important. In fact, I think they are more important than mathematical elegance, or even theoretical "rightness" (within limits). For example, for the edit privilege, I'd prefer a set of rules like:

  1. Must have at least AA accepted edits.
  2. No more than BB percent of edits rejected.

This is very easy to compute, but more importantly, it is easy and intuitive to control and adjust.
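
As a sketch of how simple that is to implement (AA and BB are the hypothetical per-site thresholds from the rules above):

```python
AA = 50  # hypothetical: minimum number of accepted edits
BB = 20  # hypothetical: maximum percent of suggested edits rejected

def can_edit_without_review(accepted: int, rejected: int) -> bool:
    if accepted < AA:                    # rule 1: enough accepted edits
        return False
    total = accepted + rejected
    return 100 * rejected <= BB * total  # rule 2: rejection rate within bounds

print(can_edit_without_review(50, 5))    # True: ~9% of edits rejected
print(can_edit_without_review(50, 150))  # False: 75% of edits rejected
```

Each site could then tune AA and BB directly, without anyone needing to reason about what a smoothed probability threshold means.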


General comments (5 comments)
Monica Cellio wrote almost 4 years ago

In our current spec for trust levels, trust for close votes is based on successful flagging. That could possibly be refined; on SE you wouldn't want the people running Smoke Detector auto-flaggers for spam to earn privileges on sites they're not otherwise on. But the basic idea is that they're both content moderation, and question flags include flags to close.

Olin Lathrop wrote almost 4 years ago · edited almost 4 years ago

@Monica: Yes, how well-accepted close flags are is a good metric. But it's just one metric. There still needs to be some participation threshold, "skin in the game". You don't want a close-class of user emerging where all they do is close questions without actually participating in the site in a more meaningful way.

Sigma wrote almost 4 years ago

I agree that the current formula is somewhat complex and fragmenting it across a bunch of different metrics makes it even less intuitive. I have three semesters of stats and have to think through the math carefully - how is a casual user expected to grasp what is going on without frustration?

Sigma wrote almost 4 years ago · edited almost 4 years ago

"You don't want a close-class of user emerging where all they do is close questions without actually participating in the site in a more meaningful way."

Sure, it would be nice if they did more, but shouldn't people be able to choose the level of contribution they're comfortable with? If they want to act as a glorified spambot, why should we prevent that? Especially if it no longer affects their access to higher privileges on other actions, I don't see the issue.

Olin Lathrop wrote almost 4 years ago

@Sigma: Because when you don't participate for real, you will have a different view of what a good question is.