Really, you made this without AI? Prove it

Photo: The Verge AI
As many as twelve different organizations are currently pushing their own standards for "AI-free" content labeling, creating chaos instead of the transparency they promise. Faced with a flood of generative material, creators increasingly find themselves forced to prove that their work is the product of human hands. Although standards such as C2PA were meant to automate this process, their effectiveness remains negligible, pushing artists toward certifications like "Not by AI" or "Human Authored." The problem lies in the lack of a coherent definition of what "human-made" means today. With AI tools integrated into professional software, the line between inspiration and automation is blurring, and verification often relies solely on trust or on tedious audits of sketches and drafts. For users, this means a growing crisis of confidence in digital content: without top-down regulations akin to organic food certifications, any graphic or text can be unfairly dismissed as "digital waste." Instead of making authenticity easier to identify, the current excess of conflicting labels turns proving authorship into an additional, unpaid full-time job for creators. A unified global standard remains merely a proposal for now, while the technology outpaces every attempt to tame it.
“It looks like AI.” For a modern artist, writer, or photographer, that sentence lands almost like a verdict, or an accusation. In a world where generative artificial intelligence mimics human craftsmanship with extraordinary precision, audiences are becoming increasingly skeptical. The problem is exacerbated by the fact that internet platforms often avoid clearly labeling content generated by algorithms, which deepens the crisis of trust. Since the machines have no interest in admitting their origins, humans must take the initiative.
A solution gaining traction in the creative industry is a universal “human-made” designation: a kind of ethical certificate, reminiscent of the Fair Trade logo known from commerce. As Adam Mosseri, head of Instagram, noted, as the technology develops it will become more practical to label “real” media than to try to catch all the fakes. According to a report by the Reuters Institute, web users already have a strong sense that search results and social media are saturated with synthetic content, which only intensifies the demand for visible proof of human authorship.
The failure of standards and the chaos of new certification
Previous attempts to regulate the market, such as the C2PA (Content Credentials) standard, have proven largely ineffective. Despite support from giants like Adobe, Microsoft, and Google, this system is being ignored en masse by entities profiting from AI anonymity. Creators of “slop” (low-quality mass-generated content) and disinformation agents have a direct financial interest in hiding the fact that their work did not come from human hands. In response to this gap, at least 12 different initiatives have emerged offering alternative “AI-free” labels.
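The core idea behind Content Credentials is that a manifest binds a cryptographic hash of the media bytes to provenance claims, and a signature makes any later tampering detectable. The sketch below illustrates that mechanism only in miniature: real C2PA manifests are embedded in the file and signed with X.509 certificate chains, whereas this toy version uses a symmetric HMAC and a made-up claims dictionary purely for illustration.

```python
# Toy illustration of the C2PA-style idea: hash the media, attach
# provenance claims, sign the pair. NOT the real C2PA format —
# real manifests use X.509 signatures embedded in the file itself.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real private signing key


def make_manifest(media: bytes, claims: dict) -> dict:
    body = {
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}


def verify_manifest(media: bytes, manifest: dict) -> bool:
    body = manifest["body"]
    # Fails if the media was altered after signing.
    if body["media_sha256"] != hashlib.sha256(media).hexdigest():
        return False
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


photo = b"raw image bytes"
m = make_manifest(photo, {"tool": "camera", "ai_generated": False})
assert verify_manifest(photo, m)           # untouched file checks out
assert not verify_manifest(b"edited", m)   # any edit breaks the proof
```

The weakness the article describes is visible even here: verification only works if the manifest travels with the file, so anyone profiting from anonymity can simply strip it off.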
The problem lies in the lack of consistency. Organizations such as the Authors Guild are introducing a “human authored certification” dedicated exclusively to books, while projects like Proudly Human or Not by AI try to cover a wide range of media, from graphics to music. Each of these organizations uses a different verification methodology. Some, like Made by Human, rely solely on trust and let anyone download the badge graphic, making them useless against fraudsters. Others, like No-AI-Icon, declare manual inspection of works, which seems unsustainable given the current scale of internet production.
Where does the human end and the algorithm begin?
The biggest challenge for new certification systems is defining what “human-made” actually means. In an era when AI tools are integrated directly into professional software (like Photoshop or Word), the boundary becomes fluid. Jonathan Stray from the UC Berkeley Center for Human-Compatible AI poses a key question: does discussing an idea with an LLM before manually executing it disqualify a creator from receiving the label? The lack of clear regulations, such as those enjoyed by “organic” products, makes “AI-free” certification a battlefield over definitions.
Some services are trying to find a middle ground. The Not by AI initiative offers badges for creators, provided that at least 90 percent of the work is the result of human labor. However, this is a purely declarative approach. On the other hand, platforms like Proof I Did It rely on blockchain technology. They create an immutable record of the creative process, generating digital certificates that mathematically prove the history of the file's creation. Thomas Beyer from the Rady School of Management suggests that Web3 could create a “premium segment” in the art market, where authenticity will be guaranteed cryptographically rather than just visually.
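The "immutable record of the creative process" that such platforms promise usually boils down to a hash chain: each step of the work records a hash of its artifact plus the hash of the previous step, so no entry can be rewritten later without invalidating everything after it. Below is a minimal, generic sketch of that idea (it is not the API of Proof I Did It or any real service; production systems additionally anchor the chain head on a public blockchain to timestamp it).

```python
# Minimal "proof of process" log as a hash chain. Generic sketch,
# not any real service's API; real systems anchor the final hash
# on a public blockchain to get a trusted timestamp.
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry


def add_step(chain: list, description: str, artifact: bytes) -> None:
    entry = {
        "prev": chain[-1]["hash"] if chain else GENESIS,
        "description": description,
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)


def chain_is_valid(chain: list) -> bool:
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev:
            return False  # link to the previous step is broken
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False  # this entry was edited after the fact
        prev = entry["hash"]
    return True


log = []
add_step(log, "pencil sketch", b"scan-v1")
add_step(log, "inked version", b"scan-v2")
assert chain_is_valid(log)
log[0]["description"] = "AI draft"  # tampering with history...
assert not chain_is_valid(log)      # ...breaks the whole chain
```

Note what this does and does not prove: it shows the history was recorded in order and never rewritten, but nothing in the mathematics stops a creator from hashing AI-generated artifacts in the first place, which is exactly the definitional gap the article describes.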
Stigmatization and the economy of deception
The motivation to hide AI involvement is huge, especially where big money is involved. An example is Coral Hart, a romance author who produced over 200 novels using AI in a year, earning six-figure sums. Hart openly admits that she does not label her works as generated by algorithms because she fears the “stigmatization” that could destroy her business. A similar mechanism works for AI influencers or creators of pornographic content based on digital clones – revealing the truth would destroy the illusion that users pay for.
- C2PA: A technical standard supported by Big Tech, currently ignored by mass AI creators.
- Not by AI: A labeling system based on a 90% human input threshold, but relying on voluntary declaration.
- Proudly Human: An initiative aiming for audit certification, threatening legal action for misuse of the logo.
- Blockchain: A method of permanently recording creation history, eliminating guesswork in favor of mathematical proof.
Trevor Woods, CEO of Proudly Human, admits that fighting logo misuse will be difficult and may require legal intervention. Currently, however, talks between label creators and governments are in their infancy. The development of AI capabilities is progressing much faster than the reactions of regulatory bodies. Until a single, universally recognized standard is established, creators will be forced to prove their “humanity” on their own, often showing sketches, drafts, and recordings of the creation process as the only credible evidence.
In a world flooded with synthetic “slop,” the value of human, biological creativity may paradoxically increase. The key to the survival of professional creators, however, is not to fight the technology itself, but to build an infrastructure of trust that allows the audience to distinguish craftsmanship from an algorithm. Without the unification of these efforts, the “human-made” badge will remain just another insignificant image in a sea of digital content, and we will lose the ability to believe our own eyes.