

Post History

What can be done to block Codidact content from getting used by crawlers/for AI training?


posted 1mo ago by Mithical

Answer
#1: Initial revision by Mithical · 2024-05-15T12:26:59Z (about 1 month ago)
> To what extent can we block "crawlers" and the like from stealing site content? What is technically possible?

We can block at least the OpenAI crawler and the Google-Extended crawler (for Gemini) through the `robots.txt` file. We've been discussing this in the admin room for the past few days, and while nothing has been done yet, the general sentiment has been leaning towards blocking these AI crawlers.

If the community indicates support for such a move, we'll most likely block AI crawlers to the extent possible, at least for crawlers that we're aware of and have documented methods of blocking. (We don't want to block _all_ crawlers, since that would mess up e.g. the Wayback Machine and search engines.)
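As a rough sketch of what that `robots.txt` could look like (OpenAI documents `GPTBot` as its crawler's user agent, and Google documents `Google-Extended` as the token that controls use of content for Gemini training; the exact set of agents to block would depend on what the team decides):

```text
# Block OpenAI's crawler from the whole site
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI training (does not affect Google Search indexing)
User-agent: Google-Extended
Disallow: /

# All other crawlers (search engines, the Wayback Machine, etc.) remain allowed
User-agent: *
Disallow:
```

Because `robots.txt` rules are per user agent, entries like these block only the named AI crawlers while leaving ordinary search and archiving crawlers untouched.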