A retired schoolteacher in Ohio. A small business owner in Texas. A nurse saving for her daughter’s college fees. In 2025 alone, fraudsters using artificial intelligence stripped an estimated $20 billion from Americans — more than half of it in cryptocurrency. The FBI is sounding alarms. IRS criminal investigators are overwhelmed. And the technology making it possible is the same AI touted daily as a wonder of modern progress. This is what the evidence actually shows.
The Industrial Scale of the Problem
Kyle Holder’s story is not unusual — it is now routine. The Ohio woman lost her entire $300,000 life savings to a fraudster who used AI to establish months of convincing, emotionally calibrated contact before directing her funds into untraceable cryptocurrency wallets. The scheme took less than three months. Recovery was zero.
CBS News reported in April 2026 that IRS criminal investigators describe the volume of AI-assisted crypto fraud as “massive” and accelerating. The FBI’s 2025 figure of $20 billion in cyber theft — more than half of it cryptocurrency-linked — represents a single-year record and a roughly 30% increase on 2024 totals.
The scale matters because it changes the moral framing. This is no longer a story about unusually gullible people encountering unusually clever criminals. It is a story about an infrastructure — tooled, automated, and scalable — that can target thousands of potential victims simultaneously with personalised, plausible deception that no human criminal network could sustain.
“Banks have found these scams incredibly challenging to detect. Their customers pass all the required checks and send the money themselves — criminals haven’t needed to break any security measures.”
Ajay Bhalla, Cyber Intelligence Chief, Mastercard — via ABC News

The Four AI Weapons Now in Active Use
Fraud is not new. What is new is the leverage AI provides: criminals with modest technical skills can now automate, personalise, and scale attacks that previously required large, skilled teams. Four techniques have emerged as dominant.
Deepfake Video & Audio
AI generates real-time video or audio that impersonates executives, family members, or bank officials. Used in “vishing” calls and video-verified wire transfers. Deloitte tracked a 700% rise in deepfake banking incidents since 2022.
Synthetic Identity Fraud
Real Social Security numbers are combined with AI-generated biographical data — photos, browsing histories, credit trails — to create entirely fictitious people who then open bank accounts and take out loans. Thomson Reuters called it “one of the fastest-growing financial crimes.”
AI-Personalised Phishing
Large language models scrape social media to write phishing emails personalised to each target — referencing their employer, their children’s names, their recent purchases. Open rates and click-through rates are dramatically higher than mass phishing. Kaspersky documented a 60% increase in AI-generated phishing attempts in 2024.
AI Crypto Investment Fraud
Chatbots pose as financial advisers, building trust over weeks before steering victims toward fraudulent cryptocurrency platforms they control. The FBI issued a formal public service announcement in December 2024 specifically about this method.
AI tools have lowered the barrier to entry for financial crime. What once required a team now requires a subscription.
Three Cases That Illustrate the Method
The $300,000 Retirement Account
Kyle Holder of Ohio was contacted by someone she believed to be a romantic interest she had met online. Over approximately three months, an AI-assisted persona maintained consistent, affectionate, emotionally intelligent communication — responding at appropriate hours, remembering personal details, mirroring her interests. When the “relationship” reached a point of trust, her contact introduced a cryptocurrency investment opportunity. By the time she realised what had happened, her entire $300,000 in retirement savings was gone.
IRS criminal investigators told CBS News this pattern — known as “pig butchering” — now constitutes the largest single category of AI-assisted financial fraud by dollar value.
Primary source: CBS News, April 2026 ↗

The CFO Who Was Never on the Call
In a case documented by Deloitte’s financial services research team, a finance employee at a multinational corporation received a video call from what appeared to be their CFO and several senior colleagues. All were deepfakes. The employee was instructed to authorise a series of international wire transfers totalling $25 million before anyone else in the organisation was informed. The funds were dispersed across multiple jurisdictions within hours of the call ending.
Deloitte’s research projects that deepfake-enabled banking fraud could reach $40 billion annually by 2027 if current trend lines continue.
Primary source: Deloitte Insights ↗

The People Who Never Existed
A survey of 500 fraud and risk professionals commissioned by AI security firm Deduce and reviewed first by ABC News found that financial institutions are now routinely encountering “customers” who are entirely fabricated. Generative AI creates a plausible identity from a combination of real stolen data and manufactured biographical detail — photographs, purchase histories, social media footprints, even employment records — that passes standard verification checks. These synthetic people take out loans, establish credit lines, and disappear.
One fraud specialist described it as “identity theft from people who don’t exist” — a characterisation that captures both the novelty and the difficulty of prosecution.
Primary source: ABC News ↗

How We Got Here: A Brief Evidence Timeline
- 2020–2021: Deepfake technology becomes commercially accessible. What required a research lab in 2018 is available as consumer software by 2021. Voice cloning and face-swapping tools proliferate on open-source repositories. Early documented cases of audio deepfake fraud targeting CEOs emerge in Europe.
- 2022: FBI begins tracking AI-assisted investment fraud. The FBI’s Internet Crime Complaint Center (IC3) documents a surge in cryptocurrency investment fraud schemes using chatbot personas. The pattern — extended trust-building followed by a single devastating request — becomes known as “pig butchering.”
- 2023: Large language models enter the fraudster toolkit. The release of capable public LLMs lowers the language barrier for international fraud operations. Phishing messages previously identifiable by poor grammar become indistinguishable from legitimate correspondence. Thomson Reuters flags synthetic identity fraud as one of the fastest-growing financial crimes.
- 2024: Regulatory bodies begin formal documentation. The European Banking Authority and FBI both issue formal guidance on AI-assisted financial fraud within the same 12-month window. Feedzai’s research estimates over 50% of fraud now involves AI in some form. The FBI issues a public service announcement specifically about AI crypto fraud in December.
- 2025–2026: The $20 billion year — IRS investigators sound public alarms. FBI data places 2025 cyber theft losses at $20 billion, a single-year record. IRS criminal investigators, speaking publicly to CBS News in April 2026, describe being overwhelmed by the volume of AI-assisted cryptocurrency fraud cases. Deloitte projects $40 billion in annual deepfake banking losses by 2027.
Why AI Fraud Works Where Old Fraud Failed
The mechanics of manipulation predate AI — but AI removes the bottlenecks of scale and skill.
The psychology of fraud has not changed. What has changed is the removal of every bottleneck that once limited its reach. Traditional fraud required skilled operators who could sustain a convincing persona in real time, often across language and cultural barriers. It required one-to-one effort. It required plausible materials — letterheads, websites, voices — that took time and money to produce.
AI eliminates all of these constraints simultaneously. A single operator with access to a large language model, a voice cloning API, and a deepfake video tool can now sustain dozens of simultaneous “relationships” with potential victims, each personalised to the individual’s publicly available data, each maintained around the clock, each calibrated to recognise and respond to emotional states.
Mastercard’s cyber intelligence chief described the resulting challenge to banks with unusual candour: customers who have been psychologically manipulated “pass all the required checks and send the money themselves.” The fraud bypasses security systems entirely because the victim is the one authorising the transaction.
“Generative AI is the use of artificial intelligence tools capable of producing content — text, images, audio, video and data — with simple prompts. It allows fraudsters to send phishing messages faster, create more convincing fake identities, and impersonate real people at scale.”
Ari Jacoby, Deduce (ABC News investigation, 2023); the techniques he described are now standard practice in 2026

Verified warning from law enforcement
The FBI’s Internet Crime Complaint Center issued PSA #241203 in December 2024 specifically warning that criminals are using AI-generated content — including fake investment websites, AI chatbot advisers, and synthetic news articles — to add legitimacy to cryptocurrency investment frauds. The PSA includes specific guidance for anyone who believes they may already be a victim. Read the original FBI PSA ↗
How to Identify an AI-Assisted Scam
Because AI fraud succeeds by removing traditional red flags — grammatical errors, cultural incongruities, implausible claims — the detection burden has shifted to behavioural and structural patterns. Across the documented cases, the same signals recur: a relationship initiated online that builds trust for weeks or months before any request for money; an investment opportunity, almost always cryptocurrency, introduced by the contact rather than sought by the victim; pressure to act quickly or to keep the transaction private; and contact that remains on calls and video but never moves to in-person meetings.
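The shift from content-based to behavioural detection can be sketched as a simple rule-based score. This is a hypothetical illustration only — the flag names, weights, and thresholds below are assumptions for demonstration, not any institution's actual detection logic.

```python
# Hypothetical sketch: scoring the behavioural red flags that recur in
# documented AI-fraud cases. Flag names and weights are illustrative
# assumptions, not a real fraud-detection system.

RED_FLAGS = {
    "first_payment_to_crypto_exchange": 3,  # new payee is a crypto platform
    "relationship_initiated_online": 2,     # contact began unsolicited online
    "urgency_or_secrecy_requested": 3,      # pressure to act before telling anyone
    "guaranteed_investment_returns": 3,     # "can't lose" framing
    "refuses_in_person_contact": 2,         # calls and video only, never in person
}

def scam_risk_score(observed_flags):
    """Sum the weights of the observed flags; higher means riskier."""
    return sum(RED_FLAGS[f] for f in observed_flags if f in RED_FLAGS)

def risk_band(score):
    """Map a raw score to a coarse risk band (illustrative cutoffs)."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "elevated"
    return "low"
```

The point of such a sketch is that no single signal is decisive — a crypto payee alone is routine, urgency alone is common — but the documented cases consistently combine several at once.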
What Institutions Are (and Are Not) Doing
The regulatory response has been notable more for the urgency of its language than the speed of its implementation. The European Banking Authority published a detailed analysis of AI-enabled online financial fraud in December 2025. The FBI has issued multiple public service announcements. Deloitte has briefed banks at senior level about the deepfake threat.
Financial institutions themselves are deploying AI-based fraud detection — an arms race dynamic that most security professionals describe as currently favouring the attackers. Mastercard has stated publicly that it is using AI to trace money movement patterns. Several major banks have introduced additional friction for large transfers, particularly to cryptocurrency exchanges.
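The "additional friction" some banks have introduced can be illustrated with a minimal policy sketch. The threshold, payee registry, and rule structure below are hypothetical assumptions, not any bank's real policy.

```python
# Hypothetical sketch of transfer "friction" rules: hold a transfer for
# out-of-band verification if it goes to a known crypto-exchange payee,
# or if it is large and the payee was added recently. All values are
# illustrative assumptions.

FRICTION_THRESHOLD_USD = 10_000                    # illustrative cutoff
CRYPTO_EXCHANGE_PAYEES = {"exampleexchange.com"}   # illustrative registry

def requires_friction(amount_usd, payee, payee_age_days):
    """Return True if the transfer should be held for extra verification."""
    if payee in CRYPTO_EXCHANGE_PAYEES:
        return True   # any amount routed to a crypto exchange
    if amount_usd >= FRICTION_THRESHOLD_USD and payee_age_days < 30:
        return True   # large transfer to a recently added payee
    return False
```

As the article notes, rules like these only add delay and a second check; they cannot stop a victim who has been persuaded to authorise the transfer and to vouch for it when contacted.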
But consumer advocates and security researchers consistently note that the burden of protection continues to fall on individuals. Banks largely do not reimburse customers who were manipulated into authorising their own transactions, even when AI was demonstrably involved. The legal framework has not kept pace with the technology.
Banks deploy AI to detect AI-generated fraud. Most security researchers describe this arms race as currently favouring the attackers.
The claim that AI is now a primary driver of financial crime at historic scale is confirmed by primary sources: FBI data, IRS criminal investigators speaking on record, peer-reviewed financial security research, regulatory guidance from the EBA, and documented individual cases. The $20 billion figure for 2025 is FBI-sourced. The 700% deepfake increase is from Deloitte’s financial services research. No speculation has been added. Every figure in this article links to the original institution that produced it.
Primary sources — all claims in this article trace to the documents below
- Feedzai (2024): More Than 50% of Fraud Involves the Use of Artificial Intelligence — feedzai.com
- FBI Internet Crime Complaint Center, PSA #241203, December 2024 — ic3.gov
- PwC UK: Impact of AI on Fraud and Scams — pwc.co.uk
- European Banking Authority: AI and Online Financial Fraud, December 2025 — eba.europa.eu
- HBK CPAs: How Artificial Intelligence Is Making Financial Fraud More Convincing Than Ever — hbkcpa.com
- Kaspersky: AI Phishing and Scams — kaspersky.com
- Forbright Bank: Protect Yourself from AI Fraud — forbrightbank.com
- CBS News (Anna Schecter, April 23, 2026): AI Is Fueling a Massive Surge in Crypto Fraud Schemes, IRS Investigators Say — cbsnews.com
- LexisNexis Risk Solutions: AI and Online Fraud — risk.lexisnexis.com
- ABC News (Quinn Owen, October 2023): How AI Can Fuel Financial Scams Online, According to Industry Experts — abcnews.go.com
- ThreatMark: The Dark Side of Artificial Intelligence — threatmark.com
- Deloitte Insights: Deepfake Banking and AI Fraud Risk on the Rise — deloitte.com
- Payments Dive: AI Drives Global Fraud Surge — paymentsdive.com

