MENLO PARK, Calif. — The AI scam threat is escalating as Meta rolls out new artificial intelligence tools designed to detect and block increasingly sophisticated online fraud across WhatsApp, Facebook, and Messenger. Meta said the new protections are aimed at scam tactics that are becoming harder for users to spot as criminals use AI to imitate trusted people, automate outreach and scale deception across major social platforms.
The company announced the measures on March 11, saying it is adding device-linking warnings on WhatsApp and new alerts for suspicious friend requests on Facebook. Meta also said it removed more than 159 million scam ads in 2025 and took down 10.9 million Facebook and Instagram accounts linked to criminal scam centers, underscoring the scale of the problem facing global platforms.
The latest update positions the story squarely inside the tech industry’s broader race to contain AI-powered fraud. For users, the issue is no longer limited to crude phishing emails or obvious fake pages. Scams are increasingly polished, personalized, and urgent, making them more likely to trigger financial loss before a victim stops to verify what is happening.
How Meta says the new tools work
Meta said one new WhatsApp safeguard is designed to warn users when a request to link their account to another device appears suspicious. That matters because compromised or hijacked accounts can be used to exploit trusted contacts, spread fake requests for money or pull victims into wider fraud schemes.
On Facebook, the company said it is testing alerts for friend requests that may show signs of impersonation or suspicious behavior. Meta said the effort is part of a wider push to detect scam patterns earlier, remove bad actors faster, and create more friction before a user engages with a fraudulent account.
The company also tied its latest measures to cooperation with industry partners and law enforcement, signaling that platform moderation alone is not enough to disrupt organized scam operations. The push also comes as Meta continues to face scrutiny over its broader platform policy changes and how it manages trust and safety across its services.
Why the AI scam threat is getting harder to detect
The central problem is that AI has lowered the cost and difficulty of deception. Tools that can generate convincing text, mimic speech patterns, or help tailor messages to specific targets are making scams faster to launch and easier to personalize.
The U.S. Federal Trade Commission has warned that voice cloning can make requests for money or sensitive information seem far more believable, especially when a caller sounds like a relative, supervisor or other trusted person. That warning has become more relevant as AI-generated audio and synthetic content move closer to mainstream consumer use.
The U.S. Postal Inspection Service has also warned consumers about scams using artificial intelligence, advising the public to ignore and delete messages that demand a quick decision or pressure people to send money. That advice highlights a key reality of modern fraud: speed and emotional pressure are often central to the scam itself.
Why this matters beyond Silicon Valley
The AI scam threat is not just a Silicon Valley problem or a niche cybersecurity topic. It is now a mainstream tech issue affecting everyday communication, online trust, and the safety of digital platforms used by billions of people for family contact, commerce and work.
For Meta, the announcement is also about credibility. The company is trying to show users, regulators and advertisers that it can respond to fraud with stronger built-in protections rather than relying only on users to recognize danger on their own. The more scams begin to look like normal conversations, the more pressure platforms face to detect bad behavior before a victim clicks, responds, or pays.
That makes this story bigger than one software update. It is part of a larger test of whether leading tech companies can keep pace with AI-driven abuse while continuing to expand the tools and networks people rely on every day. For now, Meta’s latest move suggests the industry sees scam prevention as one of the clearest and most urgent front lines in the wider battle over artificial intelligence and platform trust.