AI Training Data Giant Mercor Is Reportedly Looking to Buy the Work You Did at Your Old Job

Photo: Robot and woman working on a laptop in an office © Valerii Apetroaiei via Getty
Up to $500 for access to old design files and documents from previous jobs: that is the offer training data giant Mercor is making to users, putting a price on employees' intellectual output for the purposes of AI development. According to the Wall Street Journal, the company is actively seeking high-quality materials for training large language models (LLMs) and offering financial compensation to individuals willing to share their archives. For specialists around the world, this is a tempting but risky source of extra income.

The practice is sparking significant legal controversy, since most employment contracts contain strict intellectual property clauses that assign the rights to all created content to the employer. A former employee who sells such data could face serious legal consequences, including accusations of trade secret theft. From a technological perspective, the trend shows how desperately AI companies need unique, human-generated data to combat model hallucinations. Users should remember, however, that a one-time payout from Mercor could mean permanently burning professional bridges and entering a legal battle with powerful corporations that have no intention of relinquishing the rights to their assets.
The market for data to train artificial intelligence models currently resembles a gold rush in which the traditional deposits, meaning the publicly accessible internet, are rapidly drying up. Faced with copyright infringement lawsuits and bot-blocking by media giants, tech companies are seeking new, increasingly controversial sources of "fuel" for their algorithms. As the Wall Street Journal reports, the data acquisition giant Mercor has decided to go a step further by reaching out directly to employees. The proposal is as simple as it is shocking: sell us the fruits of your work from previous companies, and we will pay you for it.
This approach sheds entirely new light on the intellectual value we generate throughout our professional careers. Mercor is no longer just looking for publicly available Reddit posts or Wikipedia articles. The company is targeting high-quality, specialized data: source code, internal analyses, technical documentation, or marketing strategies that were never intended to leave corporate servers. For many individuals who feel financially undervalued by former employers, this may sound like a tempting opportunity to "settle the score," but lawyers and ethics experts warn that we are entering exceptionally treacherous ground.
Trading intellectual property in the shadow of the NDA
Mercor's operation rests on the assumption that individual employees possess unique insights into creative and technical processes that are invaluable for training large language models (LLMs). The problem is that almost every modern employment contract contains a clause stating that all work products generated during business hours or using company resources belong exclusively to the employer. An attempt by a former employee to monetize these materials is not only a breach of professional ethics but, above all, a direct violation of non-disclosure agreements (NDAs) and trade secret protection laws.
Mercor's actions demonstrate the AI sector's desperation in its quest for a competitive advantage. Models such as GPT-4, or upcoming iterations from Anthropic, require increasingly complex data to move beyond generating simple texts. Paying private individuals for access to their professional "archives" is an attempt to bypass official data licensing channels, which are expensive and bound by restrictive legal terms. For corporations whose data might leak into competitors' models this way, it is a worst-case scenario.
A moral and legal gamble at the employee's expense
While Mercor offers real money for the materials, the risk rests entirely on the shoulders of the person selling the data. If algorithms trained on stolen code or documents begin to generate content that too closely resembles the source materials, it will only be a matter of time before the legal departments of major corporations trace the leak back to its origin. In an era of advanced metadata analysis and digital watermarking, anonymity in such transactions is illusory. A user who takes this step risks multi-million-dollar damages and permanent exclusion from the job market.

From the perspective of artificial intelligence development, this knowledge acquisition model also raises qualitative doubts. Mercor is creating a kind of "black market" for data, where verification of authenticity and ownership rights is nearly impossible. If the AI industry begins to rely on materials obtained in a legally questionable manner, the entire legal structure surrounding models like Llama or Claude could collapse under the weight of class-action lawsuits. It is no longer just about whether AI "steals" art from the internet, but whether it actively corrupts employees into leaking corporate secrets.
- Legal risk: Violation of NDAs and copyrights belonging to the employer.
- Mercor's goal: Acquiring high-quality specialized data not available publicly.
- Consequences for the industry: A potential wave of lawsuits against AI companies for receiving stolen intellectual property.
- Ethics: Exploiting the financial frustration of employees to bypass market standards.
A new era of digital industrial espionage
What Mercor is proposing can be called, without exaggeration, the democratization of industrial espionage. In the past, stealing blueprints or code required advanced operations or infiltration; today, all it takes is a high enough offer and a form on the website of an AI training company. The line between "collaborating on technology development" and acting to the detriment of a former employer is becoming dangerously blurred. It is worth asking whether any amount offered by Mercor can compensate for the loss of reputation in an industry where trust is a key currency.
This phenomenon will force tech companies to radically change their approach to data security after the termination of employment. We can expect even more restrictive device-wiping procedures, monitoring of cloud activity during the final weeks of employment, and aggressive prosecution of any attempts to transfer intellectual property. The Mercor initiative, instead of helping employees "get back" at corporations, may lead to an atmosphere of mutual distrust that will hit the entire creative and engineering community.
In my assessment, the aggressive data acquisition methods by entities such as Mercor will lead to swift intervention by regulators. Currently, we are moving in a gray area, but precedent-setting court rulings in cases of intellectual property theft for AI training purposes are inevitable. Companies building their models on a foundation "bought" from disloyal employees risk that their greatest asset — a unique dataset — will become their greatest legal liability, leading to orders to delete entire model weights from servers.