# Post History

Answer to Q&A: *What's more important for codidact - quality or helping questions get answered?*

#1: Initial revision by Karl Knechtel · 2024-03-25T21:37:58Z
At last we are addressing the elephant in the room head-on. Lundin's and Olin's answers both capture ideas that I feel are very important and that apply to varying extents across Codidact. I'll include a couple of collapsed sections of background, and then present a proposal.

<details><summary>

## There are fundamentally varying communities here
</summary>

The primary axis of comparison I want to consider here is *how technical* the subject matter is. **While this is not a binary, it influences necessarily binary policy choices**, and it is, to my mind, by far the most important factor in those decisions.

Curiously enough, the current footer does a pretty good job of sorting the sites into technical sites at the top and non-technical sites at the bottom (notwithstanding the meta sites - Meta itself, and the Proposals site - which have their own considerations and which I consider out of scope here). The sites I'd expect to play by noticeably different rules, when a choice needs to be made, are the first seven - Software Development, Electrical Engineering, Mathematics, Power Users, Linux Systems, Code Golf and Physics - versus the others (although the latter disciplines do have varying degrees of technical precision to them).

To illustrate: almost every question worth asking about Software Development is either a precise **how** question - how to do some specific, concrete task - or a precise **why** question - understanding a surprising, but reproducible result from an isolated example. **Basically anything else has rapidly diminishing value** - no matter how *interesting* it might be to experts. For example, if historical questions like "when was X implemented?" are asked at all, they are better shuffled off to the side.

In particular, debugging a thorny tangle of inter-related problems might be engaging, **but the result is not *reusable* for others**. Even the best-asked question of this sort is of a fundamentally different kind. In the long run, **everyone's time is best spent primarily on "canonical", "reference" material** that isolates single issues that can be easily described. There are two reasons for this: decomposing problems into those issues is supposed to be a fundamental skill; and the resulting questions are more easily answered directly, more easily phrased, more easily given accurate titles, and *far* more easily **searched for**.

On the other hand, consider, for example, Cooking. Most recipes are just not that delicate, and even for the ones that are (say, making a soufflé), one would hardly speak of "debugging" the process of making the food. Natural ingredients vary in consistency; taste is subjective; there is a fair amount of tolerance in cooking times and temperatures; and hardly anything *interesting* will come out quite the same every time. Suggestions about how to achieve a desired quality in the final product might be much more open-ended.

As such, while there are a *few* things that one might identify as reference material (perhaps about food safety, or explanations of basic knife skills...), most of what needs to be said about cooking is far more individualized.
</details>
<details><summary>

## Different communities are differently impacted
</summary>

My basic idea here is that **the less technical a site is, the less the conflict described in the OP exists**. If the subject is a pure art form, a matter of historical interest, etc., then answering questions *already inherently is* producing quality, to the extent that it's possible. If I want to know why my recipe turned out poorly, it's relatively less likely that this can be answered by generalizing from a category of common mistakes. Alternatively, it's much *more* likely that there's *more than one* thing I could have done better - and it's unreasonable to expect me to try to take apart the process, check individual results, etc. (Especially since trying to do so could involve tasting unsafe raw ingredients!)

But if the subject is a *craft* that uses reliable materials and aims for reproducible results; if it's something where new ideas are *designed* rather than *inspired*; or if it's something entirely abstract - then the opposite applies. On a hypothetical Furniture site, for example, it would be unreasonable to post a picture of my broken chair and ask why it failed to support an obese person. I should instead be expected to consider individual joints in my design (and the techniques used to fasten them), verify that each is as strong as I expect on its own, and only then perhaps ask why the whole system is weaker than expected (and maybe get an answer that uses engineering software to run a detailed model simulation).

So, in my view, it's in the most technical areas where the problem emerges most strongly. (And that's why it's been so *noticeable* on Stack Exchange: Stack Overflow absolutely dominates the entire rest of the network, and even "the rest of the network" is dominated by spin-offs from Stack Overflow.)
</details>

## Fixing the problem for the communities that need it

Ironically enough, I conclude that **this seemingly "social" problem is best solved with technical measures**. Specifically: I propose that the day-to-day, personalized, "please fix this" questions should go *in a separate category*, and that sites should be able to *opt in to adding* such a category (perhaps even making it the default for new questions). There is one piece of desired new technical support here: the ability to close questions as duplicates cross-category. Everything else needed is, as far as I'm aware, already implemented.
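
To make the data-model implication concrete, here is a minimal sketch in Python (all names here are hypothetical illustrations, not the actual QPixel schema): the only genuinely new behaviour is that a duplicate link may point across category boundaries.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Category(Enum):
    # Hypothetical category names, for illustration only.
    REFERENCE = "Q&A"       # canonical, reusable reference material
    HELP = "Specific Help"  # day-to-day "please fix this" questions

@dataclass
class Question:
    id: int
    category: Category
    title: str
    duplicate_of: Optional["Question"] = None

def close_as_duplicate(question: Question, target: Question) -> None:
    """Close `question` as a duplicate of `target`.

    The one piece of new support the proposal asks for: this link is
    allowed to cross category boundaries, so that a help request can
    be pointed at a canonical reference question.
    """
    # Deliberately *no* `question.category == target.category` check.
    question.duplicate_of = target

# A naive help request gets closed against the canonical treatment:
canonical = Question(1, Category.REFERENCE, "Why does integer division truncate?")
help_request = Question(2, Category.HELP, "My program prints 0 - please fix it")
close_as_duplicate(help_request, canonical)
```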

Voting is **not sufficient** to separate out fundamentally different kinds of Q&A content, and it can **have harmful unintended effects**. In particular, keeping the useful content "above" less focused help requests generally requires downvoting the latter, which discourages new users. Moreover, while such voting is useful to the community, external search engines don't care about it. If a site is flooded with poorly-asked, unfocused questions, that makes it harder to find the good ones from outside - and such searches will happily surface irrelevant questions with accidentally-clickbait titles (because the OP misidentified the central problem).

By using a separate category, technical communities can keep useful reference Q&A separated from the chaff, while voting on each kind by standards appropriate to that content. Naively asked questions don't need terrible scores to keep them out of the way, and curators seeking a duplicate target can instantly get better results just by filtering by category. Further, there is automatically a separate space that experts can trawl for recurring themes, which can then inform new self-answered canonicals. In short, the category is **valuable metadata that cannot be encoded by post scoring** - since the Wilson score is already trying to tell us many other things.
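
(For reference, since I invoked it: the Wilson score in question is, in the variant I'm familiar with, the lower bound of the Wilson confidence interval on a post's "true" upvote fraction. This is a sketch of the textbook formula, not a claim about exactly what the Codidact software computes.)

```python
from math import sqrt

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the 'true'
    upvote fraction; z = 1.96 gives a 95% confidence level."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    centre = phat + z * z / (2 * n)
    margin = z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

# A +1/-2 post ranks well below a +4/-0 post, even though both have
# only a handful of votes:
print(round(wilson_lower_bound(1, 2), 2))  # ~0.06
print(round(wilson_lower_bound(4, 0), 2))  # ~0.51
```

The point stands either way: one scalar per post is already being asked to encode quality, usefulness and visibility all at once, and it cannot also encode "what kind of question is this".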