Welcome to Codidact Meta!
Codidact Meta is the meta-discussion site for the Codidact community network and the Codidact software. Whether you have bug reports or feature requests, support questions or rule discussions that touch the whole network – this is the site for you.
Should we have a network-wide policy against AI-generated answers?
We had this discussion over at Software Development: Should we allow answers generated by ChatGPT? The general consensus so far seems to be that such answers shouldn't be allowed, mostly because they are low-quality content.
But now I'm seeing such answers popping up on other Codidact communities as well. Rather than having every single Codidact community come up with their own rules for this, I think a network-wide policy would be more sensible.
Particularly since that means that community-specific moderators might get backup in spotting and moderating such posts - they aren't easy to spot at a glance. It would also be nice if all such answers were moderated to the same standard, with the same methods of spotting them, no matter which Codidact site the answer was posted on.
If we decide not to allow them, we should also standardise disciplinary measures across the sites - that is, the punishment for posting something AI-generated without disclosure, pretending to be the author. After "Someplace Else" introduced their policy against such answers, posting one leads to a one-week suspension on the first offense.
Examples like this, where the poster openly states ChatGPT as the source, shouldn't lead to disciplinary measures IMO - just post deletion.
Matters to discuss:
- Should we have a network-wide policy against AI-generated answers, and if so, how should it be phrased?
- How should such posts be moderated? "Someplace Else" has not publicly shared how its moderators spot AI-generated content, but there are various online tools for this, of varying reliability.
3 answers
- Any text you didn't write yourself must be set off in a quote box and attributed. I believe this is already existing policy, and applies regardless of where the quoted text came from (like another person or an AI).
Passing off others' words as your own, without proper quoting and attribution, is plagiarism and should be punished beyond just post deletion. The punishment should be a suspension ranging from a day or two to a month, depending on circumstances as judged by the moderator. The egregiousness of the infraction, and the general attitude and previous history of the user, are to be taken into account. Multiple repeated violations should result in permanent account suspension. Again, we should generally leave the severity of the punishment to the moderator's judgement.
- When quoting any non-definitive source, whether AI or not, the quoting should be minimal and accompanied by discussion that adds its own value. Non-definitive quotes are not content, but can be useful for spurring a response or providing context to an explanation. Answers that are mostly non-definitive quotes are not useful and should not be allowed.
- All AI-generated content is non-definitive. A definitive source is one that relies on the reputation of someone with the proper credentials or experience to be significantly more credible than an average person.
We have posted our default policy for generative-AI content. It's more of a clarification than an actual change; as pointed out in another answer, representing ChatGPT output, or any other content that you did not create, as yours is plagiarism and was already disallowed. When you use another's material you need to disclose it. Communities were also already empowered to moderate low-quality content, such as answers that consist entirely of quotes from unreliable sources. That's true whether the unreliable source is a blog, AI output, or something you heard someone say at a conference.
Our communities have autonomy to make decisions about AI, either more strict or more lenient (within our attribution policies), and we'll support our moderators and communities however we can.
The short answer is "no" but also a little "yes".
The ethos of Codidact is that the individual communities determine for themselves what is and is not appropriate for their communities (with some fairly minimal network-wide policies). There may well be communities where ChatGPT-produced content is welcome.
That said, I suspect most communities will not welcome such content. To that end, it makes sense to have a model policy that individual communities could adapt to their needs. In other words, each community would have its own policy (including no policy), but wouldn't need to develop it from scratch, and, as a result, the policies across communities would be more uniform.
In this vein, another valuable activity at the network level would be centralizing resources on identifying ChatGPT-produced content. This could be a page that lists detection tools, with some evaluation of their effectiveness and notes on which are used in other communities. The page could additionally offer guidance on how to use these tools efficiently in a moderation workflow and get the most out of them.