Review Suggested Edit

Approved.
This suggested edit was approved and applied to the post 11 months ago by ArtOfCode.

Original:
  • The short answer is "No" but also a little "yes".
  • The ethos of Codidact is that the individual communities determine themselves what is and is not appropriate for their communities (with some fairly minimal network-wide policies). There may well be communities where ChatGPT produced content is welcome.
  • That said, I suspect most communities will not welcome such content. To that end, having a *model* policy that individual communities could adapt to their communities makes sense. In other words, each community would have its own policy (including no policy), but each community wouldn't need to *develop* its own policy and, as a result, the policy across communities would be more uniform.
  • In this vein, another valuable activity at the network level would be centralizing resources on identifying ChatGPT produced content. This could be a page that references tools with some evaluation of the effectiveness and which are used in other communities. Additionally the page may include guidance on how to use these tools efficiently in a moderation workflow or get the most use out of them.

Suggested revision:
  • The short answer is "no" but also a little "yes".
  • The ethos of Codidact is that the individual communities determine themselves what is and is not appropriate for their communities (with some fairly minimal network-wide policies). There may well be communities where ChatGPT produced content is welcome.
  • That said, I suspect most communities will not welcome such content. To that end, having a *model* policy that individual communities could adapt to their communities makes sense. In other words, each community would have its own policy (including no policy), but each community wouldn't need to *develop* its policy, and, as a result, the policy across communities would be more uniform.
  • In this vein, another valuable activity at the network level would be centralizing resources on identifying ChatGPT produced content. This could be a page that references tools with some evaluation of their effectiveness and which are used in other communities. Additionally, the page may include guidance on how to use these tools efficiently in a moderation workflow or get the most use out of them.

Suggested 11 months ago by Lux