Models Are Applying to Be the Face of AI Scams

Photo: Wired AI
Will models and influencers become tools for AI fraud? A growing number of professional actors and models are signing up to generate deepfake content that may be used in illegal phishing campaigns. According to the latest research, more and more people in the creative industry see financial potential in creating synthetic images for fraudulent platforms. This phenomenon raises serious concerns about digital ethics and safety. Professional models offer high-quality video material that can be used to create credible false online identities, and content creators often do not realize the potential legal and moral consequences of their involvement. Cybersecurity experts warn that the practice may significantly increase the scale of internet fraud and make it harder to detect. Technology platforms and law enforcement will have to develop new strategies to counter these advanced forms of disinformation, and the first legal regulations aimed at limiting the practice can be expected in the coming months.
In the shadow of rapidly developing AI technology, a disturbing trend emerges that may threaten the safety of thousands of people on the internet. An increasing number of channels on Telegram are offering work to models who will become the faces of AI-powered fraud.
A New Face of Cybercrime
An analysis conducted by journalists revealed that dozens of Telegram channels are publishing job offers for "AI models". However, behind the seemingly innocent job offer lies a dangerous practice of defrauding people using computer-generated images.
Most offers are directed at young women who are asked to share their photos and document scans. Their likenesses are then used to create fake profiles aimed at extorting money from unsuspecting victims.
How Do Scammers Operate?
The mechanism of the scam is terrifyingly simple:
- Recruiting models through Telegram channels
- Collecting their photos and personal data
- Generating realistic images using AI tools
- Creating fake profiles on dating and social media platforms
- Establishing contact with potential victims to extort money
Technology Against People
Advances in artificial intelligence make such scams increasingly difficult to detect. Face-generation models are now so sophisticated that they can produce images nearly indistinguishable from genuine photographs.
For Polish internet users, this means greater vigilance is needed, along with a critical approach to people met online. Cybersecurity experts recommend exercising particular caution when forming online acquaintances.
Scale of the Phenomenon
Although the exact scale of the practice is unknown, experts estimate it may involve hundreds or even thousands of fake profiles. Those most at risk are lonely individuals seeking online relationships and older people who may have difficulty recognizing digital manipulations.
Protection Against Fraud
To protect yourself from such threats, it is advisable to:
- Verify the identity of people met online
- Be suspicious of profiles that seem too attractive or too polished
- Never send money to strangers
- Use image verification tools
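For readers curious how image verification tools work under the hood, many rely on perceptual hashing: a compact fingerprint of an image that stays similar even after minor edits, so a recycled or lightly altered profile photo can be matched against known images. The sketch below is illustrative only and assumes the image has already been decoded and downscaled to an 8x8 grayscale grid; real tools handle that step with imaging libraries.

```python
# Minimal sketch of average hashing, a simple perceptual-hash technique
# used by image-verification tools. Assumes the image is already decoded
# and downscaled to an 8x8 grayscale grid (64 values, 0-255); real tools
# use an imaging library such as Pillow for that preprocessing step.

def average_hash(pixels):
    """Return a 64-bit hash: each bit is 1 if that pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# Two near-identical 8x8 "images" (dark top half, bright bottom half),
# differing in a single tampered pixel:
original = [10] * 32 + [200] * 32
tampered = original.copy()
tampered[0] = 250  # one altered pixel

d = hamming_distance(average_hash(original), average_hash(tampered))
print(d)  # a small distance indicates the same underlying photo
```

Because the hash is derived from coarse brightness structure rather than exact bytes, cropping, recompression, or small retouches rarely change more than a few bits, which is why reverse-image-search services can still flag a stolen photo after it has been edited.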
The Future of Cybersecurity
The development of AI technology requires continuous adaptation of methods to protect against cybercrime. Close cooperation between security institutions, social platforms, and technology creators will be necessary to effectively counteract such threats.
The coming years will show whether we are able to develop effective mechanisms to protect against increasingly sophisticated methods of fraud using artificial intelligence.