Meta to cut back on third-party vendors in favor of AI for content enforcement
Meta will implement advanced AI systems over several years to take over tasks related to enforcing rules on its platforms. New algorithms will detect fraud, illegal materials, and other violations, replacing current employees of external content moderation firms. The change is part of a broader cost optimization strategy at the company. Artificial intelligence is expected to work faster and more consistently than moderator teams, though humans will still be needed for more complicated cases requiring judgment. For users, this means potentially faster responses to malicious content, but also the risk of more algorithmic errors, particularly in matters requiring cultural context. For the content moderation industry, it is another signal that the role of humans in this area is systematically diminishing. Meta expects the investment in AI to bring long-term savings, though history shows that fully replacing humans with algorithms in content moderation still faces serious challenges.
Meta is making a strategic decision that could fundamentally change the way content moderation works on its platforms. The company intends to gradually withdraw from hiring external firms to enforce content policy and replace them with advanced artificial intelligence systems. This is not just a simple cost optimization — it is a shift toward technology that promises to be more efficient, but at the same time raises serious questions about the future of human work in the content moderation industry.
Over the past decade, Meta has employed tens of thousands of external company employees who manually reviewed billions of posts, comments, and videos. These people worked in conditions that were often criticized — low pay, no benefits, psychological stress resulting from exposure to brutal and traumatic content. Now Meta wants to replace a significant portion of this workforce with algorithms that will be able to identify fraud, illegal media, spam, and other policy violations with greater precision and speed.
A multi-year project to implement new AI systems has already begun, although its full scope and timeline remain unclear. For the content moderation industry — and especially for Polish companies that employ thousands of workers doing this work — this is a signal that fundamental changes are already underway.
Why is Meta deciding to make this change?
The reasons for this decision are multifaceted, but economic and efficiency considerations dominate. Hiring external vendors for content moderation is expensive, especially at the scale of Meta's operations: the company must monitor content across Facebook, Instagram, WhatsApp, and Threads, and the number of posts to review each day runs into the billions.
AI systems, once implemented, can work around the clock without vacations, health insurance, or raises. They can process significantly more data than humans, and faster. With large language model (LLM) technology having reached a significant level of maturity in 2024, Meta sees an opportunity to radically change its approach to moderation.
A second important reason is the quality of policy enforcement. Human moderators, despite their dedication, are prone to errors — especially when working under pressure, tired, and exposed to traumatic content. AI, if properly trained, can offer more consistent decisions and better handle the subtleties of certain types of violations.
Meta also has access to enormous historical datasets — years of moderation conducted by humans. This is excellent training material for AI models. Each moderator's decision can be used to improve algorithms, creating a feedback loop in which the system becomes increasingly better.
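As a rough illustration of that feedback loop (a minimal sketch, not Meta's actual pipeline), the example below trains a toy text classifier on invented historical moderator decisions using scikit-learn; the data, features, and model choice are all assumptions:

```python
# Hypothetical sketch: learning from historical moderator decisions.
# Data and model choice are illustrative, not Meta's real pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record pairs a post with the moderator's decision (invented).
history = [
    ("win a free iphone, click this link now", "remove"),
    ("congrats on the new job!", "keep"),
    ("send $100 in gift cards to claim your prize", "remove"),
    ("lovely photo from the trip", "keep"),
]
texts, labels = zip(*history)

# TF-IDF features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New moderator decisions can be appended to `history` and the model
# refit periodically, which is the feedback loop described above.
print(model.predict(["claim your free prize now"]))  # e.g. ['remove']
```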
What moderation tasks will algorithms take over?
According to available information, AI will be responsible for a wide range of content policy enforcement tasks. The first targets are fraud, illegal media, spam, and content related to human trafficking. These are far from marginal issues: fraud on social media platforms alone is a multi-billion dollar global problem.
Fraud identification is particularly interesting from a technical perspective. Scammers often use advanced tactics to bypass filters — they falsify identities, create fake profiles, impersonate known brands. Traditional rule-based systems do not handle this variability well. Modern AI models, built on transformer architectures, can learn to recognize patterns in user behavior, post structure, and connections between accounts that indicate fraud.
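To make that concrete, the sketch below shows the kind of behavioral signals such a model might consume as input; every feature name and threshold here is an invented assumption, and a production system would learn these patterns rather than hard-code them:

```python
# Illustrative only: hand-rolled behavioral features of the sort a
# learned fraud model might take as input. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    posts_per_day: float
    repeated_link_ratio: float  # share of posts reusing the same URL
    flagged_connections: int    # ties to already-flagged accounts

def suspicion_score(acct: Account) -> float:
    """Combine simple heuristics into a 0..1 suspicion score."""
    score = 0.0
    if acct.age_days < 7:
        score += 0.3            # very new accounts are riskier
    if acct.posts_per_day > 50:
        score += 0.3            # bot-like posting volume
    score += 0.2 * acct.repeated_link_ratio
    if acct.flagged_connections > 3:
        score += 0.2            # clustered with known bad actors
    return min(score, 1.0)

print(suspicion_score(Account(age_days=2, posts_per_day=120,
                              repeated_link_ratio=0.9,
                              flagged_connections=5)))  # 0.98
```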
Illegal media — particularly material related to child exploitation — is a priority for Meta for legal and ethical reasons. AI systems, trained on large datasets of images and videos, can flag potentially illegal content faster than humans. Meta has been using PhotoDNA technology and similar solutions for this purpose for years, but new AI systems will be able to work in a more advanced way.
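For a flavor of how hash-based matching works, here is a toy example using the open-source imagehash package rather than PhotoDNA itself; the blocklist hash, the file name, and the distance threshold are invented placeholders:

```python
# Toy illustration of perceptual-hash matching in the spirit of
# PhotoDNA-like systems; uses the open-source `imagehash` package,
# not Meta's actual tooling. The blocklist entry is a made-up value.
from PIL import Image
import imagehash

BLOCKLIST = {imagehash.hex_to_hash("ffd8b1a09c8e7064")}

def is_flagged(path: str, max_distance: int = 5) -> bool:
    """Flag an image whose perceptual hash is near a known bad hash."""
    h = imagehash.phash(Image.open(path))
    # Hamming distance tolerates small edits (crops, re-encoding).
    return any(h - bad <= max_distance for bad in BLOCKLIST)

print(is_flagged("upload.jpg"))  # assumes upload.jpg exists locally
```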
Spam, while it may seem less serious than fraud or illegal media, is nevertheless a huge problem for users. Algorithms can learn to recognize the characteristic features of spam — repeated messages, links to suspicious sites, posting patterns typical of bots — and remove them on a massive scale.
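One of those signals, near-duplicate messages posted at scale, can be sketched in a few lines; the normalization rules and the repetition threshold below are illustrative assumptions:

```python
# Sketch of a single spam signal: near-duplicate messages at scale.
# Normalization rules and the threshold are illustrative assumptions.
import re
from collections import Counter

def normalize(text: str) -> str:
    """Collapse case, URLs, and digits so trivial variations match."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "<url>", text)
    text = re.sub(r"\d+", "<num>", text)
    return re.sub(r"\s+", " ", text).strip()

def flag_repeats(messages: list[str], threshold: int = 3) -> set[str]:
    """Return normalized message templates seen `threshold`+ times."""
    counts = Counter(normalize(m) for m in messages)
    return {m for m, n in counts.items() if n >= threshold}

batch = [
    "Buy followers at http://a.example now!!",
    "Buy followers at http://b.example now!!",
    "buy followers at http://c.example NOW!!",
    "lunch was great today",
]
print(flag_repeats(batch))  # the three spam posts collapse to one template
```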
Implications for employees and the content moderation industry
For tens of thousands of content moderation workers worldwide, Meta's decision is potentially catastrophic. The Polish BPO (Business Process Outsourcing) industry employs thousands of people at companies such as Telus, Appen, Lionbridge and many smaller vendors who handle content moderation for Meta and other platforms. If Meta truly reduces orders for moderation services, it will have a cascading effect on these companies and their employees.
It is worth noting, however, that the process will take several years. Meta will not dismiss all moderators at once: first it will test the new AI systems, compare their performance against human moderation, iterate, and refine. The transition will be gradual, but the ultimate direction seems clear.
For the content moderators themselves, this change could be liberating — this work is considered one of the most psychologically difficult in the technology industry. Exposure to violence, pornography, fraud, and other traumatic content causes many moderators to suffer from post-traumatic stress disorder (PTSD). If AI truly takes over a significant portion of this work, it could improve the quality of life for thousands of people. At the same time, for those who are financially dependent on this work, especially in lower-income countries, it will mean job loss.
Technical challenges and limitations of AI in content moderation
Despite promising capabilities, AI systems in content moderation have serious limitations. The first challenge is context and edge cases. Content moderation is not simple binary classification — many situations require deep understanding of cultural, historical, and social context.
For example, a post containing hate speech may be a fragment of a song, a quote from a book, a critical comment, or genuinely hateful content. A human moderator can understand this by reading the entire conversation and understanding the author's intent. AI, even advanced AI, may have difficulty with such subtleties. There is a risk of both false positives (removal of legal content) and false negatives (missing actual violations).
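The trade-off can be shown with a tiny worked example: raising the decision threshold cuts wrongful removals but lets more real violations through. The scores and labels below are invented:

```python
# Invented scores and labels, purely to show the threshold trade-off.
scores = [0.95, 0.80, 0.60, 0.40, 0.20]  # model's "violation" scores
labels = [1,    1,    0,    1,    0]     # 1 = actual violation

def rates(threshold: float) -> tuple[int, int]:
    """Count false positives (wrong removals) and false negatives (misses)."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

for t in (0.3, 0.5, 0.7):
    fp, fn = rates(t)
    print(f"threshold={t}: {fp} wrongly removed, {fn} missed violations")
# threshold=0.3: 1 wrongly removed, 0 missed violations
# threshold=0.5: 1 wrongly removed, 1 missed violations
# threshold=0.7: 0 wrongly removed, 1 missed violations
```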
The second challenge is the evolution of violation tactics. Scammers, hackers, and those spreading illegal content are constantly adapting. As soon as they learn how the AI system works, they will change their methods. This creates an arms race — Meta will need to constantly update and refine its models to keep up with new threats.
The third problem is systematic errors in training data. If the historical data on which AI is trained contains biases — for example, if moderators were more inclined to remove content from certain cultural or religious groups — then the AI model will reproduce these biases on a massive scale. This can lead to unfair content moderation for certain communities.
Comparison with competitors and industry trends
YouTube, TikTok, and Twitter are also changing their approach to content moderation. However, Meta appears to be more aggressive in this direction, which may be related to its enormous scale and financial resources.
YouTube has for years used advanced algorithms to identify policy-violating material, particularly copyright infringement and adult content. TikTok, owned by China's ByteDance, draws on its parent company's AI research and has deployed it aggressively. Twitter (now X), under Elon Musk's leadership, drastically reduced the number of moderators and began relying more on algorithms and user flagging.
Interestingly, each of these platforms is moving in a similar direction — toward greater automation — but with varying levels of success. Twitter/X experienced a significant increase in unwanted content after the moderation reduction, suggesting that full automation may not be a solution. Meta, having more resources and more advanced technology, may achieve better results, but the question remains whether AI will ever be able to fully replace human judgment in content moderation.
Perspective for the Polish technology market
For the Polish technology and BPO industry, Meta's decision has concrete implications. Poland is one of the largest content moderation outsourcing centers in Europe — companies such as Sykes, Teleperformance, TTEC and local vendors employ thousands of people for this work. A reduction in demand for content moderation services will have a direct impact on these companies.
However, this is also an opportunity for Polish technology companies. Rather than competing in the content moderation services market, they can specialize in building and optimizing AI systems for moderation. Polish machine learning and data science talent is respected globally and could be put to work developing better algorithms for Meta and other platforms.
Additionally, the Polish BPO industry has a chance to transform. Instead of being a provider of cheap content moderation services, it can become a technology partner, helping platforms implement and refine AI systems. This would require investment in training and a change in business model, but could lead to more sustainable growth and higher margins.
The future of content moderation — a hybrid approach?
The reality is that full automation of content moderation is still a distant goal. Even Meta, with all its resources, will not be able to replace all human moderators with AI — at least not within a few years. A more realistic approach is a hybrid model, where AI handles routine, easily identifiable violations, and humans handle more complex cases requiring judgment.
In such a model, the role of moderators changes — instead of looking at every post, moderators can focus on verifying AI decisions, handling user appeals, and dealing with edge cases. This could reduce the number of moderators needed, but not eliminate them entirely. For the BPO industry, this means a shift toward higher skills and higher salaries, but fewer positions.
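In code terms, such a hybrid pipeline could route decisions on model confidence; the threshold and the classify() stub below are assumptions for illustration, not a description of Meta's system:

```python
# Hedged sketch of hybrid routing: the model acts alone only when
# confident; everything else is queued for human review.
def classify(post: str) -> tuple[str, float]:
    """Stand-in for a real model: returns (verdict, confidence)."""
    if "gift card scam" in post:
        return "remove", 0.97
    return "keep", 0.55

def route(post: str, auto_threshold: float = 0.95) -> str:
    verdict, confidence = classify(post)
    if confidence >= auto_threshold:
        return f"auto-{verdict}"       # AI decides routine cases
    return "human-review-queue"        # edge cases go to moderators

print(route("limited time gift card scam offer"))  # auto-remove
print(route("ambiguous quote from a song lyric"))  # human-review-queue
```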
With its decision to implement advanced AI systems over several years, Meta is sending a clear signal: the future of content moderation is automated. How quickly this will happen and how well it will work are questions we will get answers to over the next few years. One thing is certain — the content moderation industry is on the brink of transformation.