
Welcome to Codidact Meta!

Codidact Meta is the meta-discussion site for the Codidact community network and the Codidact software. Whether you have bug reports or feature requests, support questions or rule discussions that touch the whole network – this is the site for you.

Comments on Support subjective scoring

Parent

Support subjective scoring

+2
−1

Codidact, like many other sites, relies on the wisdom of the majority. This approach has some well-known flaws. One is the "tyranny of the majority", which is really just a special case of "what if the majority votes wrong?"

Early on, communities are small and tight-knit. They're obscure, and a certain type of person is preselected to join them. Their small size encourages cohesion, and people naturally try to get along with each other. The votes work beautifully. Those of us with SO accounts (and reddit, and...) older than 2010 have all seen it.

As the community grows, all sorts of different people start arriving, and there is no longer homogeneity. The community's growing mass increases inertia, and that inertia makes individuals take interactions for granted. They stop trying to get along and instead want to disrupt, rebel against, or control the community. Principled voting declines, and voting blocs based on ideology begin to form. This greatly devalues voting for users who try to leverage the wisdom of the crowd to distinguish good answers from bad ones.

What if:

  • The site analyzes past voting patterns to find which users have voted the same way on the same types of questions
  • A vote similarity score is calculated for each pair of users (this can be done more efficiently than a naive O(N²) all-pairs comparison)
  • The "subjective score" is the same vote tally, but weighted by each voter's similarity to the user viewing the post (a minimal sketch of this weighting follows the list)
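
To make this concrete, here is a minimal sketch of one way the weighting could work. It is not QPixel code; the vote layout, the agreement-based similarity measure, and the clipping of negative similarities are all hypothetical choices made purely for illustration.

    # Minimal sketch in Python, assuming an in-memory vote table of the form
    # votes[user][post_id] = +1 or -1 (a hypothetical layout, not QPixel's schema).

    def similarity(votes, a, b):
        # Agreement of users a and b on the posts both have voted on,
        # rescaled to the range [-1, 1]. Returns 0 if they share no votes.
        shared = set(votes[a]) & set(votes[b])
        if not shared:
            return 0.0
        agreements = sum(1 for post in shared if votes[a][post] == votes[b][post])
        return 2 * agreements / len(shared) - 1

    def subjective_score(votes, viewer, post):
        # The same vote tally, but each vote is weighted by how similarly its
        # caster has voted compared to `viewer` in the past.
        score = 0.0
        for user, user_votes in votes.items():
            if user != viewer and post in user_votes:
                # Clip negative similarities to 0 so dissimilar voters are
                # ignored rather than having their votes inverted.
                weight = max(similarity(votes, viewer, user), 0.0)
                score += weight * user_votes[post]
        return score

    votes = {
        "alice": {"q1": +1, "q2": -1, "q3": +1},
        "bob":   {"q1": +1, "q2": -1},
        "carol": {"q1": -1, "q3": +1},
    }
    print(subjective_score(votes, "alice", "q1"))  # 1.0

In this toy example bob has always agreed with alice, so his upvote on q1 counts in full for alice, while carol's downvote is discounted to nothing because her past votes have not matched alice's. Clipping negative similarities to zero is one possible design choice; letting them go negative would actively invert the votes of people who disagree with you, which seems harder to justify.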

This would give users a low-impact, peaceful way to handle controversy in the community. People are mad about homework questions, but you love helping with homework? No problem: you'll see homework questions because you vote them up, and they won't see them because they don't.

The technical challenges to implementing this aside, what if such a system were in place? Would it help improve the overall user experience?


2 comment threads

Pending edit? (4 comments)
This looks like an interesting idea, though my gut reaction is "are we turning into social media?" du... (1 comment)
Post
+4
−0

This is an interesting proposal. Although I think it would be detrimental in its current form, I'd like to emphasise that I would like to see more discussion of potential modifications to the way the site is curated and presented. I don't think this should ever be regarded as "finished" - improvements should always be sought.

As such I've added an upvote to the question to indicate that this is an important discussion to have.

My concerns

Seeing less

If you see mostly posts that others with similar voting patterns like, then this may lead to an echo chamber effect where some posts that would benefit you are hidden from you.

Curating less

Votes on Codidact are not just about expressing what you like and want to see; they indicate which posts are beneficial to the community.

Since the community is responsible for measuring the quality of a post, showing people only what they are expected to upvote may have a cost in quality. If all of the people who might downvote a post are never shown it, this means that the people best placed to recognise a problem are prevented from seeing it.

This is not just a voting bias. The people who would comment to help explain how a post can be improved will also be kept away from it. I suspect this may also lead to a decrease in suggested edits. Every post is only presented to those who like it, so everything gets upvotes and the critical analysis of a community of experts is removed.
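
To put made-up numbers on this concern: suppose a post receives two upvotes and five downvotes, so the raw tally is -3, but the viewer's similarity weights happen to favour the upvoters and discount the downvoters. The weighting scheme and every number below are hypothetical.

    # Hypothetical illustration only: a post with 2 upvotes and 5 downvotes.
    votes_on_post = [+1, +1, -1, -1, -1, -1, -1]

    # Assumed similarity weights from one viewer's perspective:
    # high for the two upvoters, near zero for the five downvoters.
    weights = [0.9, 0.8, 0.1, 0.0, 0.0, 0.1, 0.0]

    raw_tally = sum(votes_on_post)
    subjective = sum(w * v for w, v in zip(weights, votes_on_post))

    print(raw_tally)   # -3: the community as a whole rejects the post
    print(subjective)  # 1.5: this viewer sees only approval

The viewer sees a post that looks well received, never encounters the downvotes, and so never sees the criticism that prompted them.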

Testing

My concerns have not been measured - they are just my own opinions. Personally I would prefer not to experiment with this on codidact.com unless there is a strong reason to believe there will be a benefit, but that is not my decision.

If a Codidact community expresses interest in trying out this approach or one like it, it would just need someone to make the required modifications to the software.

The QPixel software that runs Codidact is open source, so anyone can modify it and set up their own instance to test this if they wish, even if there isn't an existing Codidact community that wants to run a pilot.


2 comment threads

Votes are likes, this cannot be changed (2 comments)
Why would echo chamber be an issue? (8 comments)
Why would echo chamber be an issue?
matthewsnyder wrote over 1 year ago

Usually, "echo chamber" is used to criticize political polarization and avoiding news that challenges one's ideology. This seems like it wouldn't be a concern on a Q&A site, where we discuss not political opinions and current events but factual questions about politically neutral topics. What sort of echo chamber could possibly develop among people discussing the Django API, for example?

Seems to me that the whole point of this site is to allow people to filter relevant posts via tags and site sections.

Derek Elkins wrote over 1 year ago

I don't know that we have a community on Django specifically, but there are a lot of "ideologies" in seemingly "factual" and "politically neutral" topics. For example, for programming and mathematics we have: choice of programming language, static vs dynamic typing, SQL vs NoSQL, desired level of abstraction, math foundations, pro- vs anti-category theory, levels of formality. I could even see valuing different trade-offs leading to splits even for the Django example. In my experience, primarily on Math.SE and CS.SE, people don't really vote based on these values, at least not directly.

Essentially what you are suggesting is a basic recommendation algorithm for the express purpose of forming filter bubbles. The difference with filtering based on tags is that it's an explicit decision made by the user with explicit criteria that the user can easily change. One of the problems with filter bubbles is people don't necessarily know that they are in one and can't easily get out of it.

Derek Elkins wrote over 1 year ago

I do think there's a difference between content-level "ideologies", e.g. political ones, and meta-level "ideologies", e.g. whether homework questions should be allowed. I think the latter is more relevant to what you're considering, and filter bubbles and echo chambers have different consequences for them. One problem with both cases is that you have multiple communities sharing the same virtual space while being ghosts to each other. Why not simply have separate communities then? For meta issues, the problem with this is the communities have conflicting policies. A new "homework" question comes in. One user downvotes and votes to close commenting that such questions aren't welcome based on that user seeing tons of such questions heavily downvoted. Another user comments that they are welcome because they see those same questions as heavily upvoted. You also have the issue of reputation in one community being shared across multiple.

matthewsnyder wrote over 1 year ago

choice of programming language, static vs dynamic typing, SQL vs NoSQL, desired level of abstraction

Are any of these on-topic for https://software.codidact.com/? They all seem irrelevant in that asking those things is against the rules to begin with, and I can't imagine anybody voting based on these. (can't comment on the math ones due to lack of experience)

meta-level "ideologies", e.g. whether homework questions should be allowed

Thank you for bringing this up - this is exactly what I'm trying to get at. To put it in your terminology, I'm saying let's create a filter bubble so that you see only the types of question people like you want to see. Except I'd prefer to not use the term "filter bubble", because it's a pejorative about political discussion.

matthewsnyder wrote over 1 year ago

while being ghosts to each other

So, no - because I'm not talking about filtering users. That would indeed not work well. In your example, people who upvote homework questions would see homework questions. People who don't upvote them wouldn't. Both groups would see non-homework questions, and interact on those, which they presumably both upvote. So really only some questions would be ghosts, not users. Does this not seem like a reasonable vision for a Q&A site? (I'm assuming the answer is no, as I'm becoming increasingly pessimistic about my proposal)

Derek Elkins wrote over 1 year ago

A question doesn't have to be explicitly about a topic for people's "ideological" views to be relevant. If someone asks how to implement Dijkstra's algorithm, a person could downvote an answer because it's in a programming language they dislike. I'm sure people do do stuff like this, but, as I said, I don't think it happens to a meaningful degree.

A core aspect of your proposal is that the only problem with content you downvote is that you see it. Issues like rep and policies are ones I've already mentioned. Another is the new user experience. A lot of people don't like homework questions because they don't want the site to devolve into a homework-help site, given the reputation such sites have and the kind of people they attract. Those people will downvote the homework questions they see, but new users will see that they are upvoted overall (because more people who like homework questions will see them).

Derek Elkins wrote over 1 year ago

Your proposal also makes voting much more of a like/dislike than it is. While there is definitely some degree of like/dislike in voting, currently, the only thing voting does is communicate to others. It changes nothing about your experience with the site. In your scheme, voting directly impacts what you see, so you are incentivized to actually vote your preferences rather than vote on the merits of the question.

trichoplax wrote over 1 year ago

People's voting will be for a mixture of reasons, some of them political, and this proposal will not distinguish the reasons. This means that filtering based on prejudice will implicitly happen, which would be much easier to detect with tags than with automatically adjusted vote values.

I like the idea behind the proposal - a way of automatically filtering to reduce the amount of manual filtering with tags. However, in practice I would expect this to separate more than just question types. A homework tag could be used to filter based on whether you want to see homework questions. If you do want to see homework questions, you can still downvote bad homework questions (for whatever your personal definition of bad is). A vote adjustment will instead not only show you homework questions, it will show you only the homework questions you would upvote. This means the bad questions get only a positive reception. It also means they only get answers from people with certain biases.