Should we have a network-wide policy against AI-generated answers?

+10
−1

We had this discussion over at Software Development: Should we allow answers generated by ChatGPT? The general consensus so far seems to be that such answers shouldn't be allowed, mostly because they are low-quality content.

But now I'm seeing such answers popping up on other Codidact communities as well. Rather than having every single Codidact community come up with their own rules for this, I think a network-wide policy would be more sensible.

In particular, a network-wide policy means that community-specific moderators might get backup in spotting and moderating such posts, since they aren't easy to spot at a glance. It would also be nice if all such answers were moderated by the same standards and the same methods of spotting them, no matter which Codidact site the answer was posted on.

If we decide not to allow them, we should also standardise disciplinary measures across the sites: that is, the consequences for posting something generated by an AI undisclosed, pretending to be the author. After "Someplace Else" introduced their policy against such answers, posting one leads to a one-week suspension on the first offense.

Whereas examples like this, where the author openly states ChatGPT as the source, shouldn't lead to disciplinary measures IMO, just post deletion.

Matters to discuss:

  • Should we have a network-wide policy against AI-generated answers, and if so, how should it be phrased?
  • How should such posts be moderated? "Someplace Else" has not publicly shared how its moderators spot AI-generated content, but there are various online detection tools for this, of varying reliability.

2 comment threads

Reliability of AI detection (1 comment)
Moderation isn't discipline (3 comments)
Moderation isn't discipline
Derek Elkins‭ wrote about 1 year ago

I find the framing of "disciplinary measures" both distasteful and inaccurate. Community members aren't children that need to be "punished", nor are suspensions all that much of a punishment. If someone violates a policy through ignorance, they should, outside of extreme cases, be corrected, and "punishment" makes no sense. If they persist after correction, then suspensions/bans eventually throttle their input to zero. It's not a matter of "punishing" them; it's a matter of decreasing their noise and disruption of the community.

Lundin‭ wrote about 1 year ago

Derek Elkins‭ It might be a tad optimistic to believe that people are posting AI-generated answers as their own out of a willingness to contribute positively to the community. Rather, from what I've seen on SE, there's a close relation between such answers and spam. It's often users with a commercial interest, wishing to advertise something through their profile, who post them in order to draw attention to their account.

Derek Elkins‭ wrote about 1 year ago

If someone is posting content in their own self-interest to the detriment of the community, then ban them. I'm not sure where you're reading into my comment an opinion one way or another about how correlated AI-generated answers are with good intent. All I'm saying is that either someone violated a policy unwittingly, in which case informing them should suffice to correct the behavior and "punishment" would be counterproductive, or they just don't care what the policy is and should simply be banned, as they are not desired in the community.