Welcome to Codidact Meta!

Codidact Meta is the meta-discussion site for the Codidact community network and the Codidact software. Whether you have bug reports or feature requests, support questions or rule discussions that touch the whole network – this is the site for you.

Post History

Q&A: Should we have a network-wide policy against AI-generated answers?
Score: 69% positive (+7 −2)

The short answer is "no" but also a little "yes". The ethos of Codidact is that the individual communities determine themselves what is and is not appropriate for their communities (with some fair...

posted 1y ago by Derek Elkins · edited 1y ago by Lux

Answer
#2: Post edited by Lux · 2023-06-04T14:37:08Z (over 1 year ago)
Minimal edits

Before:
  • The short answer is "No" but also a little "yes".
  • The ethos of Codidact is that the individual communities determine themselves what is and is not appropriate for their communities (with some fairly minimal network-wide policies). There may well be communities where ChatGPT produced content is welcome.
  • That said, I suspect most communities will not welcome such content. To that end, having a *model* policy that individual communities could adapt to their communities makes sense. In other words, each community would have its own policy (including no policy), but each community wouldn't need to *develop* its own policy and, as a result, the policy across communities would be more uniform.
  • In this vein, another valuable activity at the network level would be centralizing resources on identifying ChatGPT produced content. This could be a page that references tools with some evaluation of the effectiveness and which are used in other communities. Additionally the page may include guidance on how to use these tools efficiently in a moderation workflow or get the most use out of them.

After:
  • The short answer is "no" but also a little "yes".
  • The ethos of Codidact is that the individual communities determine themselves what is and is not appropriate for their communities (with some fairly minimal network-wide policies). There may well be communities where ChatGPT produced content is welcome.
  • That said, I suspect most communities will not welcome such content. To that end, having a *model* policy that individual communities could adapt to their communities makes sense. In other words, each community would have its own policy (including no policy), but each community wouldn't need to *develop* its policy, and, as a result, the policy across communities would be more uniform.
  • In this vein, another valuable activity at the network level would be centralizing resources on identifying ChatGPT produced content. This could be a page that references tools with some evaluation of their effectiveness and which are used in other communities. Additionally, the page may include guidance on how to use these tools efficiently in a moderation workflow or get the most use out of them.
#1: Initial revision by Derek Elkins · 2023-02-15T23:02:09Z (over 1 year ago)
The short answer is "No" but also a little "yes".

The ethos of Codidact is that the individual communities determine themselves what is and is not appropriate for their communities (with some fairly minimal network-wide policies). There may well be communities where ChatGPT produced content is welcome.

That said, I suspect most communities will not welcome such content. To that end, having a *model* policy that individual communities could adapt to their communities makes sense. In other words, each community would have its own policy (including no policy), but each community wouldn't need to *develop* its own policy and, as a result, the policy across communities would be more uniform.

In this vein, another valuable activity at the network level would be centralizing resources on identifying ChatGPT produced content. This could be a page that references tools with some evaluation of the effectiveness and which are used in other communities. Additionally the page may include guidance on how to use these tools efficiently in a moderation workflow or get the most use out of them.