Some polemics regarding genAI content
I wish to open a theoretical conversation on some aspects of the site policies regarding AI. I do this purely out of an inherent interest in deep, rigorous, clear, and self-consistent argumentation, for the benefit of all, and not out of an ulterior motive to tip the balance of Codidact’s moderation policies in this or that way. See https://meta.codidact.com/posts/288194.

> Presenting any non-original work as if it were your own is first and foremost plagiarism.

Is there a validated concept of “original”? Aleatoric composers like John Cage used what were essentially (pre-computer) “algorithms” to create musical works, and these works may undermine traditional ideas about authorship. Cage did not have complete deterministic control over the resulting form of his musical works, yet we might say he had a second-degree “responsibility” for their coming into existence: he did not design the works, but he designed the systems and processes which generated them (the toy sketch at the end of this post illustrates the distinction).

We might then say: “the important part of originality is not so much how something came to be, but that it is not an intentionally deceptive attempt to take credit for what was done by somebody else.” Is this one of the points of forbidding “unoriginality”? Is it an acknowledgment of the personal importance of “getting credit” for your ideas? In other words, what is the definition of “plagiarism”, and if it is “presenting any non-original work as if it were your own”, specifically why is that bad?

> [AI] often misrepresents information in subtle ways that often go unnoticed by those not experts in the subject matter.

I think this implies a principle not explicitly stated. If all content on Codidact were peer-reviewed, then AI errors should be caught; if there is a risk of AI contributing undetected erroneous information, that implies Codidact content is not fully peer-reviewed. It seems to follow that because not all content is thoroughly reviewed, we would rather err on the side of decreasing the *quantity* of information flowing in. But humans err too.

To my mind this point requires more theoretical development, but I wonder if the actual underlying point is (partially) about quantity rather than quality: Codidact’s users should be more than capable of identifying erroneous information, but the profligate use of automatically generated content means editors cannot keep up with the pace at which it can be generated. Is that true?

I think there is more to say, but I wanted to open an analysis with this.
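To make the process-versus-work distinction concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not any procedure Cage actually used): the author fixes the pitch set and the chance procedure, but each run realizes a different piece.

```python
import random

# The "composer" designs the system, not the piece: a fixed pitch set,
# fixed durations, and a chance procedure for combining them.
PITCHES = ["C4", "D4", "E4", "G4", "A4"]  # a pentatonic pitch set
DURATIONS = [0.5, 1.0, 2.0]               # note lengths in beats

def compose(length: int) -> list[tuple[str, float]]:
    """Generate one realization of the piece; each call differs."""
    return [(random.choice(PITCHES), random.choice(DURATIONS))
            for _ in range(length)]

if __name__ == "__main__":
    for pitch, beats in compose(8):
        print(f"{pitch} for {beats} beat(s)")
```

In this toy, whatever authorship exists attaches to `compose` and its inputs, not to any particular output it prints; that is the “second-degree responsibility” I am gesturing at.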