AI videos of sexualised black women removed from TikTok after BBC investigation

Photo: BBC Tech
A single video, in which the face of a Malaysian model was replaced with an AI-generated image of a Black woman with unnaturally dark skin, garnered over 173 million views on Instagram. An investigation by the BBC and researchers from the independent publication Riddance revealed the scale of a practice in which artificial intelligence is used to mass-produce sexualized, racist avatars. Following the journalists' intervention, TikTok removed 20 accounts that violated its community standards; Meta, the owner of Instagram, is still investigating dozens of similar profiles.

An analysis of 60 accounts showed that these digital characters are used to drive traffic to paid pornographic services. Their creators employ deepfake techniques, lifting movements and backgrounds from recordings of real influencers and then overlaying exaggerated physical traits based on harmful racial tropes.

The practical implications for users are alarming: AI technology now enables visual identity theft at scale and the monetization of a person's likeness without consent. The absence of mandatory labeling for synthetic content means millions of viewers cannot distinguish genuine footage from manipulation. While these tools democratize content creation, they also hand digital exploiters a powerful instrument against which current moderation systems offer little protection.
The boundary between reality and synthetic manipulation on social media has all but dissolved. An investigation conducted by the BBC revealed the scale of a practice in which generative artificial intelligence serves not only to fabricate false identities but has become a tool for systematic exploitation and image theft. Dozens of accounts on Instagram and TikTok used avatars of Black women to promote sexual content, often built on racist stereotypes and material taken illegally from real creators.
The scale of the problem is staggering: in just one investigation, 60 accounts were identified that built their reach on hyper-realistic, though anatomically exaggerated, images of AI-generated influencers. The mechanism was precise: these profiles served as sales funnels, directing millions of users to external sites with paid pornographic content. This is no longer a technical curiosity but the emergence of a "digital porn industry" that preys on the opacity of algorithms and the failure of tech giants' moderation systems.
Digital Colonialism and AI Fetishization
An analysis of accounts conducted by researchers from the independent publication Riddance sheds light on a disturbing aesthetic trend. Avatars created by algorithms are characterized by unnaturally dark skin tones, exaggerated body proportions, and are embedded in narratives based on so-called race-play. Profile names containing words such as "noir," "ebony," or "dark" in combination with descriptions suggesting submissiveness to specific ethnic groups clearly indicate the deliberate targeting of fetish niches. This is a phenomenon that experts call a modern form of caricature, stripping Black women of their agency in favor of mass-generated code.
Jeremy Carrasco, an AI trend analyst, notes that the technology removes any barrier of shame or social consequence. An avatar has no feelings and cannot be humiliated, which emboldens creators to push ever further in dehumanizing the imagery. Using AI to manipulate skin color toward effects impossible in nature produces beauty standards that are not only unattainable but grotesque. For real models and creators such as Houda Fonone, it is a process of "erasing" authenticity in favor of a synthetic, "refined" version of Black femininity, acceptable to the algorithm only when it fits a prescribed template.
Identity Theft on an Industrial Scale
The most drastic aspect of the practice is the use of deepfake technology to steal the work of real people. A case in point is Riya Ulan, a model from Malaysia, whose videos were unlawfully appropriated by one of the AI accounts. The creators used her body, movements, clothing, and background, overlaying her face with a synthetic mask with altered features and skin color. The result? While the original enjoyed moderate popularity, the manipulated version gained 173 million views on Instagram and 35 million on TikTok.
- Reach exploitation: AI accounts can generate reach 50 times greater than original creators thanks to aggressive optimization for algorithms.
- Lack of transparency: Despite platform requirements, most accounts did not label content as "AI-generated," misleading millions of users.
- Link chains: These profiles rarely publish explicitly pornographic content directly on the platform, using a "link-in-bio" system to bypass automated moderation filters.
This phenomenon exposes a fundamental weakness of modern social media: protection of copyright and likeness is practically nonexistent in the face of generative AI. Riya Ulan repeatedly reported the theft of her material, but the platforms reacted only after journalists intervened, which shows that user reporting mechanisms are ineffective against the mass production of synthetic content.
Tech Giants' Reactions and Systemic Impotence
Following the publication of the investigation's findings, TikTok took decisive action, removing 20 accounts and declaring "zero tolerance" for the promotion of sexual services and the use of third parties' images without consent. Platform representatives emphasize that their rules require realistic AI content to be clearly labeled. The problem, however, is that moderation happens after the fact: before an account is removed, it has already generated millions of views and redirected thousands of users to paid services, fulfilling its business objective.
Meta (owner of Instagram) reacted much more cautiously, limiting itself to a statement about conducting an investigation. Although some of the identified accounts disappeared from the service, many similar profiles still function, balancing on the edge of the regulations. The problem concerns not only the images themselves but the entire ecosystem of bots that like and comment on each other, deceiving recommendation algorithms and making synthetic profiles appear more "human" and trustworthy to the average recipient.
"The creation of caricatures and the use of race-play terminology proves that the creators of these tools do not care about safety, but about the possibility of monetizing Black women within the internet pornography machine" — Angel Nulani, Riddance researcher.
Technology as a Tool of New Oppression
Analyzing this case, we must go beyond simple moral outrage. We are dealing with a new business model in which AI drastically lowers the cost of producing "on-demand" content for specific, often harmful preferences. The ability to generate thousands of photos and videos within a few hours makes traditional moderation methods based on user reports an anachronism. If platforms do not introduce automatic detection of synthetic media at the upload stage, we will witness a flood of internet content that not only steals images but also actively destroys the reputations of real people.
The rise of AI influencers in the adult industry is a warning signal for the entire creative sector. The next stage will be theft not only of image but also of voice and distinctive persona, leaving real creators to compete with their own "improved" and more compliant digital clones. Without strict legal regulation at a global level forcing AI model producers and platform owners to ensure full traceability of synthetic content, the social media market risks becoming a toxic swamp of manipulation.
The future of content moderation must be based on cryptographic confirmation of material origin (so-called Content Provenance). The only effective method of fighting the plague of fake avatars is the systemic rewarding of content verified as "human-made" and the automatic restriction of reach for everything that does not have a digital certificate of authenticity. Otherwise, algorithms promoting engagement at all costs will continue to favor synthetic caricatures at the expense of the truth and dignity of real people.
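The provenance mechanism described above can be pictured as a signed manifest that binds a creator claim to the exact bytes of a media file, so that any alteration of the content invalidates the certificate. The sketch below is a simplified illustration, not the actual C2PA/Content Provenance standard: real systems use public-key certificates rather than the shared HMAC key assumed here, and the key, field names, and manifest layout are invented for this example.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; production provenance
# systems (e.g. C2PA) use public-key certificate chains, not a shared secret.
SIGNING_KEY = b"demo-key"

def make_manifest(content: bytes, creator: str) -> dict:
    """Bind a creator claim to the SHA-256 hash of the media bytes."""
    claim = {"sha256": hashlib.sha256(content).hexdigest(), "creator": creator}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Reject the file if it was altered or the signature does not match."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(content).hexdigest() == manifest["sha256"])

video = b"original video bytes"
m = make_manifest(video, "real_creator")
print(verify_manifest(video, m))         # True: untouched original
print(verify_manifest(b"deepfaked", m))  # False: content was altered
```

A platform applying such a check at upload time could down-rank or flag any file whose manifest is missing or fails verification, which is the "digital certificate of authenticity" idea the paragraph describes.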