Propaganda Exposed

AI-Powered Scams Explained: Phishing, Deepfake Vishing, Synthetic Identity & Crypto Fraud

Financial Scams · Investigative Report · AI & Technology

A retired schoolteacher in Ohio. A small business owner in Texas. A nurse saving for her daughter’s college fees. In 2025 alone, fraudsters using artificial intelligence stripped an estimated $20 billion from Americans — more than half of it in cryptocurrency. The FBI is sounding alarms. IRS criminal investigators are overwhelmed. And the technology making it possible is the same AI touted daily as a wonder of modern progress. This is what the evidence actually shows.

Digital financial fraud concept — binary data overlaid on currency
Artificial intelligence has transformed financial crime from an art requiring skill and patience into an industrial-scale operation. Stock image / Unsplash
  • $20B: stolen via cyber theft, 2025 (Source: FBI / CBS News)
  • 50%+: of fraud now involves AI (Source: Feedzai, 2024)
  • 700%: rise in deepfake incidents (Source: Deloitte Insights)
  • $40B: projected AI fraud losses by 2027 (Source: Deloitte Insights)

The Industrial Scale of the Problem

Kyle Holder’s story is not unusual — it is now routine. The Ohio woman lost her entire $300,000 life savings to a fraudster who used AI to establish months of convincing, emotionally calibrated contact before directing her funds into untraceable cryptocurrency wallets. The scheme took less than three months. Recovery was zero.

CBS News reported in April 2026 that IRS criminal investigators describe the volume of AI-assisted crypto fraud as “massive” and accelerating. The FBI’s 2025 figure of $20 billion in cyber theft — more than half of it cryptocurrency-linked — represents a single-year record and a roughly 30% increase on 2024 totals.

The scale matters because it changes the moral framing. This is no longer a story about unusually gullible people encountering unusually clever criminals. It is a story about an infrastructure — tooled, automated, and scalable — that can target thousands of potential victims simultaneously with personalised, plausible deception that no human criminal network could sustain.

“Banks have found these scams incredibly challenging to detect. Their customers pass all the required checks and send the money themselves — criminals haven’t needed to break any security measures.”

Ajay Bhalla, Cyber Intelligence Chief, Mastercard — via ABC News

The Four AI Weapons Now in Active Use

Fraud is not new. What is new is the leverage AI provides: criminals with modest technical skills can now automate, personalise, and scale attacks that previously required large, skilled teams. Four techniques have emerged as dominant.

📷

Deepfake Video & Audio

AI generates real-time video or audio that impersonates executives, family members, or bank officials. Used in “vishing” calls and video-verified wire transfers. Deloitte tracked a 700% rise in deepfake banking incidents since 2022.

👤

Synthetic Identity Fraud

Real Social Security numbers are combined with AI-generated biographical data — photos, browsing histories, credit trails — to create entirely fictitious people who then open bank accounts and take out loans. Thomson Reuters called it “one of the fastest-growing financial crimes.”

💌

AI-Personalised Phishing

Large language models scrape social media to write phishing emails personalised to each target — referencing their employer, their children’s names, their recent purchases. Open rates and click-through rates are dramatically higher than mass phishing. Kaspersky documented a 60% increase in AI-generated phishing attempts in 2024.

🪙

AI Crypto Investment Fraud

Chatbots pose as financial advisers, building trust over weeks before steering victims toward fraudulent cryptocurrency platforms they control. The FBI issued a formal public service announcement in December 2024 specifically about this method.

AI tools have lowered the barrier to entry for financial crime. What once required a team now requires a subscription. — Unsplash

Three Cases That Illustrate the Method

Case 01 — Cryptocurrency romance fraud

The $300,000 Retirement Account

Kyle Holder of Ohio was contacted by someone she believed to be a romantic interest she had met online. Over approximately three months, an AI-assisted persona maintained consistent, affectionate, emotionally intelligent communication — responding at appropriate hours, remembering personal details, mirroring her interests. When the “relationship” reached a point of trust, her contact introduced a cryptocurrency investment opportunity. By the time she realised what had happened, her entire $300,000 in retirement savings was gone.

IRS criminal investigators told CBS News this pattern — known as “pig butchering” — now constitutes the largest single category of AI-assisted financial fraud by dollar value.

Primary source: CBS News, April 2026 ↗
Case 02 — Deepfake executive fraud

The CFO Who Was Never on the Call

In a case documented by Deloitte’s financial services research team, a finance employee at a multinational corporation received a video call from what appeared to be their CFO and several senior colleagues. All were deepfakes. The employee was instructed to authorise a series of international wire transfers totalling $25 million before anyone else in the organisation was informed. The funds were dispersed across multiple jurisdictions within hours of the call ending.

Deloitte’s research projects that deepfake-enabled banking fraud could reach $40 billion annually by 2027 if current trend lines continue.

Primary source: Deloitte Insights ↗
Case 03 — Synthetic identity at scale

The People Who Never Existed

A survey of 500 fraud and risk professionals commissioned by AI security firm Deduce and first reported by ABC News found that financial institutions are now routinely encountering “customers” who are entirely fabricated. Generative AI creates a plausible identity from a combination of real stolen data and manufactured biographical detail — photographs, purchase histories, social media footprints, even employment records — that passes standard verification checks. These synthetic people take out loans, establish credit lines, and disappear.

One fraud specialist described it as “identity theft from people who don’t exist” — a characterisation that captures both the novelty and the difficulty of prosecution.

Primary source: ABC News ↗

How We Got Here: A Brief Evidence Timeline

  • 2020–2021

    Deepfake technology becomes commercially accessible

    What required a research lab in 2018 is available as consumer software by 2021. Voice cloning and face-swapping tools proliferate on open-source repositories. Early documented cases of audio deepfake fraud targeting CEOs emerge in Europe.

  • 2022

    FBI begins tracking AI-assisted investment fraud

    The FBI’s Internet Crime Complaint Center (IC3) documents a surge in cryptocurrency investment fraud schemes using chatbot personas. The pattern — extended trust-building followed by a single devastating request — becomes known as “pig butchering.”

  • 2023

    Large language models enter the fraudster toolkit

    The release of capable public LLMs lowers the language barrier for international fraud operations. Phishing messages previously identifiable by poor grammar become indistinguishable from legitimate correspondence. Thomson Reuters flags synthetic identity fraud as one of the fastest-growing financial crimes.

  • 2024

    Regulatory bodies begin formal documentation

    The European Banking Authority and FBI both issue formal guidance on AI-assisted financial fraud within the same 12-month window. Feedzai’s research estimates over 50% of fraud now involves AI in some form. The FBI issues a public service announcement specifically about AI crypto fraud in December.

  • 2025–2026

    $20 billion year — IRS investigators sound public alarms

    FBI data places 2025 cyber theft losses at $20 billion — a single-year record. IRS criminal investigators, speaking publicly to CBS News in April 2026, describe being overwhelmed by the volume of AI-assisted cryptocurrency fraud cases. Deloitte projects $40 billion in annual deepfake banking losses by 2027.

Why AI Fraud Works Where Old Fraud Failed

The mechanics of manipulation predate AI — but AI removes the bottlenecks of scale and skill. — Unsplash

The psychology of fraud has not changed. What has changed is the removal of every bottleneck that once limited its reach. Traditional fraud required skilled operators who could sustain a convincing persona in real time, often across language and cultural barriers. It required one-to-one effort. It required plausible materials — letterheads, websites, voices — that took time and money to produce.

AI eliminates all of these constraints simultaneously. A single operator with access to a large language model, a voice cloning API, and a deepfake video tool can now sustain dozens of simultaneous “relationships” with potential victims, each personalised to the individual’s publicly available data, each maintained around the clock, each calibrated to recognise and respond to emotional states.

Mastercard’s cyber intelligence chief described the resulting challenge to banks with unusual candour: customers who have been psychologically manipulated “pass all the required checks and send the money themselves.” The fraud bypasses security systems entirely because the victim is the one authorising the transaction.

“Generative AI is the use of artificial intelligence tools capable of producing content — text, images, audio, video and data — with simple prompts. It allows fraudsters to send phishing messages faster, create more convincing fake identities, and impersonate real people at scale.”

Ari Jacoby, Deduce / ABC News investigation, 2023 — now standard practice in 2026

How to Identify an AI-Assisted Scam

Because AI fraud succeeds by removing traditional red flags — grammatical errors, cultural incongruities, implausible claims — the detection burden has shifted to behavioural and structural patterns. These are the signals that appear consistently across documented cases.

Eight evidence-based red flags — from primary law enforcement and financial security sources
  1. Unsolicited contact that quickly becomes personal. Strangers who reach out via social media, messaging apps, or dating platforms and rapidly escalate to intimate or trusting conversation. AI personas are optimised for speed-of-trust.
  2. Investment opportunities introduced by a new contact. Any financial opportunity, especially cryptocurrency, introduced by someone you have only met online, however long-standing the relationship feels. “Pig butchering” schemes are built on exactly this sequence.
  3. Pressure to move funds quickly and without telling others. Urgency and secrecy are structural requirements of fraud. Legitimate financial advisers, banks, and employers have no reason to ask for either.
  4. Video calls where the other person looks slightly “off”. Deepfake video artifacts include unnatural blinking patterns, blurring at facial boundaries, inconsistent lighting, and audio that doesn’t quite sync. If anything seems wrong, end the call and verify through a separate, known contact method.
  5. An investment platform you cannot independently verify. AI-generated investment websites can look identical to legitimate ones. Check your national financial regulator’s official register; in the US, that is the SEC’s Investment Adviser Public Disclosure database.
  6. Wire transfers or cryptocurrency as the only accepted payment. These payment methods are irreversible. Legitimate investments in licensed instruments can always be purchased through regulated payment channels. Irreversibility is a fraud design feature, not an inconvenience.
  7. Emails personalised with specific details you never shared. AI-generated phishing messages reference your employer, your children, your recent purchases, or your neighbourhood, all scraped from public sources. Specificity is now a warning sign rather than a reassurance.
  8. An “executive” who contacts you outside normal channels with an urgent financial request. Deepfake CEO/CFO fraud typically arrives via WhatsApp, personal email, or an unexpected video call rather than official corporate channels. Verify any financial instruction from a superior through a known, official telephone number before acting.
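Several of these flags, such as urgency, secrecy, irreversible payment methods, and unsolicited investment pitches, can be approximated in software as simple keyword heuristics. The Python sketch below is a toy illustration only: the keyword lists and weights are invented for demonstration, and no real fraud-detection system relies on keyword matching alone.

```python
# Illustrative sketch of rule-based red-flag screening. Keywords and
# weights are hypothetical examples, not a production detection ruleset.
RED_FLAGS = {
    "urgency": (["act now", "immediately", "within 24 hours", "urgent"], 2),
    "secrecy": (["don't tell", "keep this between us", "confidential"], 3),
    "irreversible_payment": (["wire transfer", "bitcoin", "crypto", "gift card"], 3),
    "investment_pitch": (["guaranteed return", "investment opportunity",
                          "double your money"], 2),
}

def score_message(text: str) -> tuple[int, list[str]]:
    """Return a risk score and the list of red flags the message triggers."""
    lowered = text.lower()
    score, hits = 0, []
    for flag, (keywords, weight) in RED_FLAGS.items():
        if any(kw in lowered for kw in keywords):
            score += weight
            hits.append(flag)
    return score, hits

msg = ("This investment opportunity is urgent -- wire transfer today "
       "and keep this between us.")
score, hits = score_message(msg)
print(score, sorted(hits))
# -> 10 ['investment_pitch', 'irreversible_payment', 'secrecy', 'urgency']
```

A real screener would need far richer signals (sender reputation, channel metadata, behavioural history); the point here is only that the red flags above are structural and therefore machine-checkable.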

What Institutions Are (and Are Not) Doing

The regulatory response has been notable more for the urgency of its language than the speed of its implementation. The European Banking Authority published a detailed analysis of AI-enabled online financial fraud in December 2025. The FBI has issued multiple public service announcements. Deloitte has briefed banks at senior level about the deepfake threat.

Financial institutions themselves are deploying AI-based fraud detection — an arms race dynamic that most security professionals describe as currently favouring the attackers. Mastercard has stated publicly that it is using AI to trace money movement patterns. Several major banks have introduced additional friction for large transfers, particularly to cryptocurrency exchanges.

But consumer advocates and security researchers consistently note that the burden of protection continues to fall on individuals. Banks largely do not reimburse customers who were manipulated into authorising their own transactions, even when AI was demonstrably involved. The legal framework has not kept pace with the technology.

Banks deploy AI to detect AI-generated fraud. Most security researchers describe this arms race as currently favouring the attackers. — Unsplash


Primary sources — all claims in this article trace to the documents below

  1. Feedzai (2024): More Than 50% of Fraud Involves the Use of Artificial Intelligence — feedzai.com
  2. FBI Internet Crime Complaint Center, PSA #241203, December 2024 — ic3.gov
  3. PwC UK: Impact of AI on Fraud and Scams — pwc.co.uk
  4. European Banking Authority: AI and Online Financial Fraud, December 2025 — eba.europa.eu
  5. HBK CPAs: How Artificial Intelligence Is Making Financial Fraud More Convincing Than Ever — hbkcpa.com
  6. Kaspersky: AI Phishing and Scams — kaspersky.com
  7. Forbright Bank: Protect Yourself from AI Fraud — forbrightbank.com
  8. CBS News (Anna Schecter, April 23, 2026): AI Is Fueling a Massive Surge in Crypto Fraud Schemes, IRS Investigators Say — cbsnews.com
  9. LexisNexis Risk Solutions: AI and Online Fraud — risk.lexisnexis.com
  10. ABC News (Quinn Owen, October 2023): How AI Can Fuel Financial Scams Online, According to Industry Experts — abcnews.go.com
  11. ThreatMark: The Dark Side of Artificial Intelligence — threatmark.com
  12. Deloitte Insights: Deepfake Banking and AI Fraud Risk on the Rise — deloitte.com
  13. Payments Dive: AI Drives Global Fraud Surge — paymentsdive.com