Why can’t TikTok identify AI-generated ads when I can?

Photo: The Verge AI
Samsung and TikTok, despite being members of the Content Authenticity Initiative and declaring support for C2PA standards, continue to fall short on transparency around AI-generated content. Although both companies officially promote the labeling of synthetic material, Galaxy S26 Ultra advertisements are in practice reaching users without the required markings. Notably, the same spots published on YouTube disclose the use of AI in their descriptions, while identical content on TikTok remains unlabeled, in violation of the platform's own rules.

The problem affects not only tech giants but also smaller players like the British company Cazoo, whose advertisements, filled with visual artifacts such as morphing dental tools, only received the "advertiser labeled as AI-generated" tag after some time. According to TikTok's policy, any "significant modification" of image or sound requires disclosure, yet its verification mechanisms are proving leaky. For users, this means growing information chaos and the burden of personally spotting errors in generated images in order to distinguish reality from digital fabrication. The inconsistency between platforms undermines trust in automated AI detection systems, turning transparency into an empty marketing slogan rather than a real safety standard.
In a world dominated by algorithms, the line between reality and generative illusion is blurring faster than regulatory systems can react. While tech giants race to issue declarations regarding transparency and the ethical implementation of artificial intelligence, daily practice on platforms like TikTok exposes deep cracks in this system. The problem no longer concerns only amateur edits but hits the advertising sector directly, where powerful brands, including Samsung, seem to be bypassing their own standards for labeling AI content.
The situation is paradoxical: experienced industry observers easily spot digital artifacts indicating the use of generative AI, while the platforms' official mechanisms remain deaf and blind. This is not a matter of a lack of technology, but a lack of consistency in communication between advertisers and service providers. When companies declaring support for initiatives such as C2PA (Coalition for Content Provenance and Authenticity) fail at the stage of publishing a simple promotional clip, the entire idea of "safe AI" is called into question.
Double standards of the giants: Samsung and the lack of labels
An analysis of promotional campaigns for the Galaxy S26 Ultra shows a disturbing inconsistency in disclosure policy. Video materials presenting the privacy display feature, posted on YouTube, carried clear declarations in their descriptions that AI tools were used. The same videos, however, run as paid advertisements on TikTok, were stripped of any markings. The user receives a final product that looks synthetic but is presented as conventional footage.
What is particularly striking is that both Samsung and TikTok are members of the Content Authenticity Initiative. This is a group striving to make content authenticity a scalable and universal standard. Theoretically, both sides are playing for the same team, promoting C2PA standards. In practice, the process of conveying information about the nature of the material has failed. If Samsung consciously used AI, it should have informed the platform when submitting the advertisement. If TikTok received such information, its duty—according to its own policy—was to apply the appropriate label.

The definition of "significant modification" according to TikTok
TikTok's terms of service for advertisers are, in theory, quite restrictive. The platform requires disclosure of the fact that AI was used when content has been "significantly modified." This means going beyond minor adjustments and includes scenarios such as:
- Creating images, video, or audio that are entirely generated by AI.
- Depicting a main character performing actions they did not do in reality (e.g., dancing).
- Using AI voice-cloning to make a character say lines they never uttered.
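The three disclosure triggers above can be sketched as a simple predicate. This is an illustrative assumption about how such a policy check might be encoded, not TikTok's actual logic; the function and flag names are hypothetical.

```python
# Hypothetical sketch of TikTok's "significant modification" disclosure
# rule: an AI label is required when ANY of the three quoted scenarios
# applies. This mirrors the policy text only, not any real system.

def requires_ai_label(fully_generated: bool,
                      depicts_fabricated_actions: bool,
                      uses_cloned_voice: bool) -> bool:
    """Return True if the ad must carry an AI-generated disclosure."""
    return fully_generated or depicts_fabricated_actions or uses_cloned_voice
```

Under this reading, a fully synthetic Samsung spot would require a label even if no voice cloning or fabricated actions were involved.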
Despite such clear guidelines, the enforcement system remains full of holes. In the case of the Samsung advertisements, both companies declined to explain why the transparency process failed. TikTok responded only by pointing to its general rules and its partnership with C2PA, which, in the face of concrete evidence of missing labels, sounds like an evasion of responsibility for its own advertising ecosystem.
Visual glitches as the only early warning system
Faced with the silence of corporations, users are forced to rely on their own perceptiveness. An example is the advertisements for the British used car retailer Cazoo. These materials contained obvious visual distortions—such as a dental drill changing shape and jumping between hands—which clearly indicated generative errors. Only after direct inquiries and interventions did these advertisements begin to display the message: "advertiser labeled as AI-generated."

Today, the state of transparency on Samsung's TikTok profiles is organizational chaos. Some videos carry the platform's systemic label, others note the use of AI in small print in the description, and still others, despite their obvious digital origin, carry no marking at all. This shows that systems such as Google's SynthID or Content Credentials are useless if key players do not apply them consistently and honestly.
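For a sense of how shallow a first-pass provenance check can be: C2PA Content Credentials are embedded in media files as JUMBF boxes labeled "c2pa", so even a naive byte scan can flag whether a file carries any manifest at all. The sketch below is a deliberately crude heuristic for illustration only; it performs no cryptographic validation, the label can appear in a file by coincidence, and real verification should use a dedicated tool such as c2patool.

```python
# Naive heuristic sketch (assumption, not a real C2PA verifier):
# look for the ASCII "c2pa" JUMBF label anywhere in the file's raw
# bytes. Absence strongly suggests no Content Credentials manifest;
# presence proves nothing about validity or who signed it.

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the label 'c2pa'."""
    with open(path, "rb") as f:
        return b"c2pa" in f.read()
```

A check like this could at most tell a reviewer that an ad shipped with no provenance metadata whatsoever, which is precisely the failure mode described above.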
Advertising is a heavily regulated industry built to protect consumers from being misled. Just as the law forbids using false eyelashes in mascara advertisements, it should apply the same rigor to synthetic product imagery. If the European Union, China, and South Korea are introducing AI labeling requirements, then tech giants ignoring them on global platforms are playing with fire in light of upcoming regulations and massive financial penalties.
"If large platforms and global advertisers cannot be honest with each other in such a regulated environment, it opens the door to flooding the internet with any unverified nonsense."
The end of the era of voluntary declarations
The current state of affairs proves that relying on corporate goodwill regarding the labeling of AI slop (low-quality generative content) is a strategy destined for failure. A system in which the user or journalist must point out errors to the platform for it to graciously apply a label is inefficient. The industry needs automated content provenance detection that will function regardless of what a company's marketing department declares.
I predict that in the near future, there will be a brutal collision between the tech world and regulatory bodies. Advertising is not a space for artistic expression where a "lack of label" can be explained by aesthetics. It is an area of commercial responsibility. If TikTok and Samsung do not tighten their processes, they will lose something much more valuable than reach—the remnants of trust from audiences who increasingly look at every advertisement in their feed with deep suspicion.
