Some examples of platforms implementing these opt-out mechanisms:
- Sketchfab now offers creators an option to block AI training in their account settings
- DeviantArt pioneered these tags as part of its content protection approach
- ArtStation added both meta tags and updated its Terms of Service
- Shutterstock created a compensation model for contributors whose images are used in AI training
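For anyone unfamiliar with the mechanism itself: the opt-out is usually expressed as a robots meta tag in the page's head, or as the equivalent X-Robots-Tag HTTP header for non-HTML assets. A minimal example, using the "noai" and "noimageai" directive names that DeviantArt and Raptive document (nothing here is standardized, so exact names may vary by platform):

```html
<!-- Advisory opt-out: asks crawlers not to use this page's content for AI
     training. "noai"/"noimageai" are the directive names DeviantArt and
     Raptive use; no standard obliges any crawler to honor them. -->
<meta name="robots" content="noai, noimageai">
```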
But here's where things get concerning. There's growing evidence these tags are being treated as optional suggestions rather than firm boundaries:
- Several creators have reported the tags simply being ignored. For instance, a DeviantArt journal (https://www.deviantart.com/lumaris/journal/NoAI-meta-tag-is-NOT-honored-by-DA-941468316) documents cases where the tags weren't honored, with references to GitHub conversations showing implementation issues
- In a GitHub pull request for an image dataset tool (https://github.com/rom1504/img2dataset/pull/218), developers made respecting these tags optional rather than the default, which one commenter described as having "gutted it so that we can wash our hands of responsibility without actually respecting anyone's wishes" (a sketch of what the crawler-side check looks like follows this list)
- Raptive, one of the companies promoting these tags, admits in its support docs that they "are not yet an industry standard, and we cannot guarantee that any or all bots will respect them" (https://help.raptive.com/hc/en-us/articles/13764527993755-NoAI-Meta-Tag-FAQs)
- A proposal to the HTML standards body (https://github.com/whatwg/html/issues/9334) acknowledges these tags don't enforce consent and compliance "might not happen short of robust regulation"
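To make the "optional rather than default" point concrete, here's a minimal sketch of what honoring the tags could look like on the crawler side. It assumes the noai/noimageai directive names shown above and checks both the X-Robots-Tag response header and the in-page robots meta tag; treat it as an illustration of the idea, not the actual img2dataset implementation:

```python
# Minimal sketch: detect the advisory "noai"/"noimageai" opt-out signals
# before ingesting a page. Directive names are the ones DeviantArt and
# Raptive document; nothing obliges a crawler to run this check.
import urllib.request
from html.parser import HTMLParser

NOAI_DIRECTIVES = {"noai", "noimageai"}


class RobotsMetaParser(HTMLParser):
    """Collects the directives from every <meta name="robots"> tag."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            content = attrs.get("content") or ""
            self.directives |= {d.strip().lower() for d in content.split(",")}


def page_opts_out(url: str) -> bool:
    """True if the page signals an AI-training opt-out via the
    X-Robots-Tag response header or a robots meta tag."""
    with urllib.request.urlopen(url) as resp:
        # Header form first: useful for images and other non-HTML assets.
        header = resp.headers.get("X-Robots-Tag", "")
        if {d.strip().lower() for d in header.split(",")} & NOAI_DIRECTIVES:
            return True
        body = resp.read().decode("utf-8", errors="replace")
    # Fall back to the in-page meta tag.
    parser = RobotsMetaParser()
    parser.feed(body)
    return bool(parser.directives & NOAI_DIRECTIVES)


if __name__ == "__main__":
    if page_opts_out("https://example.com/"):
        print("Page opts out; skip it.")
```

The check is cheap and easy to write, which is what makes the img2dataset episode telling: the dispute was never about feasibility, only about whether running it should be the default. Per that PR, it shipped disabled by default.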
Some creators have given up on the tags entirely. Prominent artist David Revoy announced he's abandoning tags like #NoAI because "the damage has already been done" and he "can't remove [his] art one by one from their database." (https://www.davidrevoy.com/article977/artificial-inteligence-why-i-ll-not-hashtag-my-art-humanart-humanmade-or-noai)
This raises several practical questions:
- Will this actually work in practice without enforcement mechanisms?
- Could it be legally enforceable down the line?
- Has anyone successfully used these tags to prevent unauthorized training?
Beyond the technical implementation, I think this points to a broader conversation about creator consent in the AI era. Is this more symbolic, a signal that people want some version of "AI consent" for the open web? Or could it evolve into an actual standard with teeth?
I'm curious if folks here have added something like this to their own websites or content. Have you implemented any technical measures to detect if your content is being used for training anyway? And for those working in AI: what's your take on respecting these kinds of opt-out signals?
Would love to hear what others think.