
Patreon CEO calls AI companies’ fair use argument ‘bogus,’ says creators should be paid

Pixelift Editorial Team

Amy Price/SXSW Conference & Festivals / Getty Images

Jack Conte, CEO of Patreon, has sharply criticized AI companies' justification for using creative works without consent under the banner of "fair use." Speaking at the SXSW conference in Austin, Conte stressed that he is not opposed to artificial intelligence but considers the fair use argument "bogus." In his view, creators should be compensated for the materials used to train AI models. The problem affects millions of artists, writers, and musicians whose works are scanned and analyzed by algorithms without their knowledge or consent. Patreon, a platform connecting creators with fans and paying supporters, feels the effects of this practice directly: its users lose control over their intellectual property. Conte's position reflects a growing conflict between the AI industry and creators. While technology companies argue that training models on publicly available data is lawful use, creators demand the right to decide the fate of their work and to be paid for it. This question will be crucial for the future of the relationship between AI and the creative industries.

Jack Conte, CEO of Patreon, does not enter the debate about artificial intelligence as a Luddite or a nostalgic. He runs a technology platform that itself uses advanced tools. At SXSW in Austin, however, he clearly drew a line that he believes the AI industry crosses without scruple. In his view, companies such as OpenAI and Anthropic should not be able to train their models on artists' work without compensation, and their invocation of "fair use" is an argument that collapses under the weight of the facts. This position matters: Patreon is a platform where more than 250,000 creators earn money, from illustrators through musicians to writers. If their work feeds AI models without compensation, these people's businesses could change fundamentally.

The discussion about fair use in the context of AI is not a theoretical academic exercise; it is a battle over who has the right to profit from someone else's work. Conte sees a contradiction here that is hard to ignore: the same companies that argue they can freely train models on independent creators' content simultaneously pay significant sums to publishers such as Axel Springer and News Corp for access to their materials. This hypocrisy should be obvious even to someone with little interest in technology.

The position of Patreon's CEO is not isolated, but it carries special weight because of his company's place in the market. Patreon is not just a platform; it is an economic ecosystem that shows how contemporary creators actually earn money. Data from the platform could be a valuable contribution to the ongoing regulatory debate, especially in the United States, where the status of copyright in the age of AI remains unsettled.

Contradiction in AI strategy: free data from individual creators, paid data from publishers

Conte's observation hits at the heart of the biggest hypocrisy in the AI industry. OpenAI, Google DeepMind, and other corporations justify the use of billions of web pages, images, and texts as "fair use," the copyright doctrine that permits certain transformative uses. The argument goes: if AI generates something new and original, it does not violate the copyright of its sources.

The problem appears when we look at what these same companies do with content from large publishers. OpenAI has struck agreements with Axel Springer (publisher of Bild and Politico), News Corp (owner of the Wall Street Journal), and the Associated Press. For every article, every note, every photograph, they pay millions of dollars. If fair use really applied, why pay? The answer is simple: fair use is grayer than AI apologists admit, and large publishers have the legal resources to defend themselves.

Independent creators on Patreon, Medium, or Substack are in a completely different position. They cannot afford lawyers, they have no media influence, and their voices are lost in the noise. That is why their data is easy prey. This is not a coincidence; it is a strategic choice by the AI industry. Training models on the content of billions of people without their consent or payment is cheaper than negotiating with each rights holder individually.

Conte is right to point out this asymmetry. If fair use were truly fair and universal, it would be applied to everyone without exception. The fact that it is not suggests that AI companies know perfectly well that their legal position is weak — they simply count on no one defending themselves.

How fair use became a shield for the AI industry — and why it shouldn't be enough

The doctrine of fair use in American copyright law comes from an era when "transformative" use meant parody, criticism, or scholarly research. Its four factors, the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect on the market for the original, covered most real cases. But AI changes the equation in a way that the Copyright Act of 1976, which codified the doctrine, simply did not anticipate.

The AI companies' argument rests on the assumption that training a model is a "transformative use": the model generates new content rather than simply reproducing the original. It sounds reasonable until you consider what it really means. A model trained on billions of articles, images, and songs learns statistical patterns from those works. So the question becomes: is extracting statistical patterns from someone else's work, without permission, a "transformation"? For a copyright lawyer, not necessarily.

The fact is that no U.S. court has yet ruled definitively on the matter. There are lawsuits pending, several against OpenAI, others against Meta and Google, but verdicts could take years. In the meantime, the AI industry operates in a gray area, counting on most creators lacking the resources to defend themselves. Conte argues that this calculation should not be allowed to pay off: the law should be clear and enforceable for everyone, not just for large publishers.

It is worth noting that even among copyright experts opinions are divided. Some believe that fair use should indeed protect AI training. Others argue that it is a complete abuse of the doctrine. Conte takes the position that the law should be changed to require compensation for creators — not just for large publishers, but for everyone whose work entered the training data.

Patreon as a witness: what the platform's data tells us

Why does Conte's voice carry particular weight? Because Patreon is one of the few places where the real ecosystem of independent creators is visible in numbers. The platform has more than 250,000 active creators: illustrators, musicians, writers, podcasters. Most of them earn less than a thousand dollars a month, so every dollar of income matters.

When their work enters AI training data, these creators lose potential revenue in several ways. First, a model can generate content "in their style," reducing demand for the originals. Second, they lose the ability to control where and how their work is used. Third, they can be cut out of the value chain entirely: instead of paying them for their work, a corporation pays for access to an AI model that replaces it.

This is not a theoretical threat. We already see artists whose work, surfaced in Google Images results, is used to train models without their consent. We see musicians whose voices are cloned by AI. We see writers whose style is imitated by chatbots. Patreon has direct access to these creators: it knows what they say, what worries them, and what their real losses are.

If Conte lobbies for legal changes, he will have access to data showing the real impact of AI on independent creators. This could be key in negotiations with regulators and in courts. Patreon could be a witness that changes the game.

Why "fair use" doesn't work when it comes to AI — practical problems

Beyond the theoretical arguments, Conte points to practical problems with the fair use defense in the context of AI. The first is scale. When an author parodies a song, they use a fragment, perhaps a few dozen seconds of a three-minute track. When AI trains on billions of works, it ingests entire files, entire albums, entire books. The fourth fair use factor, the effect on the market for the original, cuts sharply here: if an AI model can generate music in an artist's style, the impact on the market for their work is enormous.

The second problem is lack of transparency. Fair use traditionally protects users who can demonstrate that they acted in good faith, for educational or research purposes. But AI companies do not disclose exactly what data they use for training. It is not known whether they trained on your blog, your photo, your song. How can one speak of fair use when the injured party cannot even know they have been injured?

The third problem is commercial use. Commercial purpose has traditionally weighed heavily against fair use. Yet AI companies earn billions on models trained on billions of works. If fair use does not protect a YouTuber who monetizes a video parodying a song, why should it protect OpenAI, which earns money on ChatGPT trained on a vast share of the world's creative output?

Conte articulates this clearly: fair use is a tool for the weak — for people who cannot afford licenses, for scientists, for students. When the tool is used by one of the richest companies in the world to earn billions, that is not fair use — that is expropriation.

Precedents: what we can learn from the past

The history of copyright shows that when technology changes the market, the law changes too, sometimes in creators' favor, sometimes against them. When photography appeared, portrait painters thought their careers were over. When film appeared, theater actors worried. When the internet appeared, publishers thought it was the end of books. New ways to earn always emerged, but it always took a fight to ensure creators were paid fairly.

A more direct precedent is digital music. When Napster appeared in 1999, it let users share MP3 files without artists' consent. The music industry fought, and won. Courts ruled that even if file sharing might qualify as fair use in some contexts, it did not at the scale Napster enabled. The result was a change in how people consume music, but one in which artists were guaranteed compensation through licensing.

Another example is Google Books. Google scanned millions of books without authors' consent, arguing fair use. After years of litigation, including a proposed class settlement that a court rejected, the courts ultimately sided with Google, finding the scanning transformative because it enabled search rather than reproduction. Even that hard-won precedent, however, does not establish that anyone may freely use others' work at the scale AI does.

Conte seems to suggest that history is repeating itself: the AI industry is counting on creators being too weak to defend themselves, but ultimately the law will change. The question is whether it will change fast enough to save the careers of independent creators, or only after the fact, when the damage is already done.

What should change: a licensing model instead of fair use

Conte not only criticizes — he also suggests a solution. Instead of relying on fair use, the AI industry should move to a licensing model, similar to the one it already uses with large publishers. Every creator whose work enters the training data should have the right to compensation — either in the form of a one-time payment or as a share of the model's profits.

How could this work? There are several possibilities. The first is a central database: every creator could register their work, and AI companies would be required to check the database before training. The second is automated matching: systems similar to the ones YouTube uses to recognize music could identify copyrighted works in training data and allocate compensation automatically. The third is an "opt-in" model replacing today's de facto opt-out: content would be protected by default, and could be used for training only if its creator decides otherwise.
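As a purely illustrative sketch, the default-protected model described above can be reduced to a simple pre-training filter: before a work enters the training corpus, the pipeline consults a registry of creator preferences and treats any work without an explicit grant as off-limits. All names here are hypothetical; no real registry or AI pipeline works this way today.

```python
from dataclasses import dataclass, field

@dataclass
class OptInRegistry:
    """Hypothetical registry of creator licensing preferences.

    Works are protected by default: a work may be used for training
    only if its creator has explicitly granted a license.
    """
    licensed: set[str] = field(default_factory=set)

    def grant_license(self, work_id: str) -> None:
        """Record an explicit opt-in from the work's creator."""
        self.licensed.add(work_id)

    def may_train_on(self, work_id: str) -> bool:
        # Default-protected: absent an explicit grant, the answer is no.
        return work_id in self.licensed

def filter_training_set(works: list[str], registry: OptInRegistry) -> list[str]:
    """Keep only works whose creators have opted in to training use."""
    return [w for w in works if registry.may_train_on(w)]

# Example: only the explicitly licensed work survives the filter.
registry = OptInRegistry()
registry.grant_license("song-123")
corpus = ["song-123", "novel-456", "photo-789"]
print(filter_training_set(corpus, registry))  # ['song-123']
```

The key design choice is the default: flipping `may_train_on` to return `True` for unregistered works would turn this into the opt-out regime that creators currently face, where silence counts as consent.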

All these solutions would cost AI companies more than the current "take everything, pay no one" model. But Conte argues that this is precisely the point: the industry should bear the cost of what it does instead of passing it on to creators. If licensing content from publishers is viable, it should be viable for independent creators as well.

This position has support among many creators, but also opposition. Some argue that requiring licensing would slow down innovation in AI. Others believe it would be technically impossible to implement. But Conte responds: if it is possible for publishers, it is possible for everyone. If it is not possible for everyone, it means the system is unfair.

Implications for Polish creators and local platforms

The discussion taking place in the United States has direct consequences for Polish creators. Many Polish platforms — from Patronite to local services for musicians and artists — use the same business models as Patreon. If copyright law is changed in the USA in favor of creators, that change will have global impact. Conversely, if AI companies win this battle in America, they will have a precedent to apply the same model everywhere else.

Poland has its own scene of digital creators: illustrators, musicians, and writers who earn through platforms such as Patreon or Patronite. Their content is already being used to train global AI models without their consent or compensation. If the law does not change, the Polish creative scene will be among the first to suffer: the market is too small for large AI companies to bother negotiating with, yet its content is large enough to end up in training data anyway.

This is also an opportunity for Polish regulators. If the European Union wants to, it can introduce more protective copyright provisions for AI; indeed, the DSM Directive already contains provisions pointing in this direction. But their effect will depend on whether creators have a voice in the discussion, and Conte shows what such a voice can sound like.

The future: will the AI industry change course, or will it have to be forced?

Recent months show that the AI industry is willing to change, but only when forced. OpenAI struck deals with publishers not because it suddenly became ethical, but because publishers have the legal resources to defend themselves. If independent creators gain similar leverage, through organizations like Patreon that can represent them collectively, change will become inevitable.

Conte seems to understand this perfectly. His appearance at SXSW was not accidental; it was a signal to the industry that Patreon will fight for its users, and a signal to regulators that there is an organization ready to help shape and enforce new rules.

This fight is playing out now. If Conte and other platforms push, if independent creators organize, if regulators pay attention, there is a chance the law will change. But if the AI industry can keep operating unimpeded, and creators remain too scattered to organize, fair use will stay a shield for expropriation. The outcome is open, but one thing is certain: the discussion Conte has started will shape the future of creative work in the age of AI.

Source: TechCrunch AI