The identity verification crisis: $12.5 billion in losses and growing
FBI, Verizon, IBM, FTC, and independent research data paint an unambiguous picture: identity verification is failing at every level — consumer, enterprise, and institutional. The attacks are growing faster than the defenses. Here is the data.
The numbers behind the crisis
Each data point below is sourced from a named, publicly available report. These are not projections — they are documented outcomes from the last 12–24 months.
FBI IC3: $12.5 billion in reported losses (2023)
The FBI's Internet Crime Complaint Center received 880,418 complaints in 2023, with losses exceeding $12.5 billion — a 22% increase from 2022. Business email compromise accounted for $2.9 billion alone. These figures represent only reported incidents; actual losses are estimated to be multiples higher.
Source: FBI IC3 Annual Report 2023
Verizon DBIR: 68% of breaches involve humans
The 2024 Verizon Data Breach Investigations Report analyzed over 30,000 real-world security incidents. Their finding: 68% of breaches involved a non-malicious human action — someone clicking a phishing link, responding to a social engineering message, or making an error. The human layer is not a secondary risk — it is the primary attack surface.
Source: Verizon DBIR 2024
IBM: $4.88M average breach cost, 292-day lifecycle
IBM's 2024 Cost of a Data Breach Report found the global average cost reached $4.88 million — the highest ever recorded. Organizations take an average of 292 days to identify and contain a breach. Breaches involving social engineering and compromised credentials consistently rank among the most expensive and slowest to detect.
Source: IBM Cost of a Data Breach Report 2024
FTC: $10 billion in consumer fraud losses (2023)
The Federal Trade Commission reported that Americans lost more than $10 billion to fraud in 2023 — a 14% increase from 2022 and the first time losses exceeded the $10 billion mark. Impersonation scams were the leading category, accounting for $2.7 billion. Investment fraud was the most costly individual category at $4.6 billion, much of it driven by identity deception.
Source: FTC Consumer Sentinel Network 2023
SlashNext: 1,265% increase in phishing post-ChatGPT
SlashNext's State of Phishing 2024 report documented a 1,265% increase in malicious phishing emails since Q4 2022, coinciding with the public release of ChatGPT. Generative AI has eliminated the traditional signals of phishing — grammatical errors, generic language, obvious formatting issues — making AI-generated social engineering indistinguishable from legitimate communication at scale.
Source: SlashNext State of Phishing 2024
Sumsub & Regula: Deepfake fraud at industrial scale
Sumsub's 2023 Identity Fraud Report documented a 10x increase in deepfake fraud attempts from 2022 to 2023. Separately, Regula's 2024 survey found that 77% of organizations worldwide have already encountered deepfake fraud attempts. McAfee's consumer survey found that 25% of adults have personally experienced or know someone who has experienced an AI voice cloning scam.
Source: Sumsub 2023 / Regula 2024 / McAfee 2023
Why every current solution misses the point
The tools we rely on were designed to protect access to systems. The attacks that cost the most money and do the most damage target trust between people. This is a category mismatch — and no amount of investment in the current paradigm can close it.
Passwords authenticate secrets, not people
A password proves that someone knows a string of characters. It cannot distinguish between the account holder, an attacker who purchased the credentials on the dark web, or an AI agent that extracted the password from a phishing site. The Verizon DBIR consistently finds that over 40% of breaches involve stolen credentials.
MFA authenticates devices, not people
MFA proves that someone possesses a registered device or has access to a phone number. It does not prove who is holding the device. SIM-swapping, MFA fatigue attacks, and real-time phishing proxies (EvilGinx, Modlishka) bypass MFA by capturing session tokens after the user completes authentication.
Biometrics authenticate bodies, not intent
Biometric systems verify physical characteristics. They cannot operate over a phone call or text message. On video calls, real-time deepfakes can now defeat face-based verification. And biometrics have a unique catastrophic risk: unlike passwords, a compromised biometric cannot be rotated.
Platform verification authenticates accounts, not humans
A blue checkmark or verified badge proves that someone once controlled an account. It does not prove they currently control it, that the account hasn't been compromised, or that the person on the other end of a call or message is the account holder. The Twitter/X verification changes of 2023 made this limitation explicit.
The missing layer isn't another factor.
It's human-to-human verification.
Every data point in this analysis traces back to the same structural gap: no authentication system verifies the identity of a person to another person. Passwords verify you to a system. MFA verifies your device to a system. Biometrics verify your body to a system. None of them answer the question that matters in the highest-cost attacks: “Is the person I'm talking to actually who they claim to be?”
Peer-to-peer authentication is the cryptographic model that closes this gap. It provides a direct, device-resident, zero-server verification layer between two human beings, designed to resist every known form of identity synthesis, including AI voice cloning, real-time deepfake video, and compromised-account social engineering.
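The underlying mechanism can be sketched in a few lines. This is a minimal illustration of the open standard involved (RFC 6238 TOTP), not Real Authenticator's actual implementation; the secret value and variable names are placeholders.

```python
# Minimal RFC 6238 TOTP sketch: both peers derive the same short-lived
# code from a pre-shared secret, so matching codes prove shared custody
# of that secret. Placeholder secret; not the product's actual scheme.
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """Derive a time-based one-time code (HMAC-SHA1, RFC 4226 truncation)."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Both contacts run this independently: Alice reads her code aloud and
# Bob checks that his device shows the same digits in the same window.
print(totp(b"example-pre-shared-secret"))
```

Because the code changes every 30 seconds and depends on a secret that never crosses the network, an impersonator cloning a voice or face has nothing to replay.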
The escalation timeline
How we got here — and why the trajectory is accelerating.
2019: First AI voice CEO fraud
Criminals use AI voice synthesis to impersonate a CEO, convincing a subordinate to transfer €220,000. Reported by the Wall Street Journal. The era of AI-powered identity fraud begins.
2021: SIM-swapping becomes mainstream
Law enforcement agencies globally report a surge in SIM-swapping attacks that bypass SMS-based MFA. The FBI IC3 documents $68 million in SIM-swapping losses.
2022: MFA bypass tools go mainstream
Real-time phishing proxy tools like EvilGinx2 become freely available, enabling attackers to capture MFA session tokens at scale. High-profile breaches at Uber and Twilio demonstrate the technique.
2023: Generative AI transforms phishing
ChatGPT's release catalyzes a 1,265% increase in phishing emails (SlashNext). AI-generated messages are grammatically perfect and contextually personalized. Traditional detection signals collapse.
2024: Deepfake video fraud at $25M scale
A Hong Kong multinational loses $25 million after an employee is deceived by a video call with deepfake versions of multiple colleagues, including the CFO. Reported by Reuters and confirmed by Hong Kong Police.
The verification gap is undeniable
Cumulative evidence from FBI, Verizon, IBM, FTC, and academic research converges on one conclusion: the most expensive and fastest-growing attacks exploit human trust — and no authentication system addresses it. Peer-to-peer authentication emerges as the response.
Frequently asked questions
How much money is lost to identity fraud each year?
The FBI IC3 reported $12.5 billion in total cybercrime losses in 2023, with business email compromise accounting for $2.9 billion. The FTC reported $10 billion in consumer fraud losses the same year, with impersonation scams as the leading category. IBM estimates the average data breach costs $4.88 million globally. Total economic impact including unreported incidents is estimated to be significantly higher.
Why is MFA not solving the identity crisis?
MFA authenticates access to accounts and devices. It does not authenticate the identity of a person in a conversation. The most costly attacks — BEC, deepfake video calls, vishing — exploit trust between people, not system access. MFA has no mechanism to verify whether a caller, emailer, or video participant is who they claim to be.
How fast is deepfake fraud growing?
Sumsub documented a 10x increase in deepfake fraud attempts between 2022 and 2023. Regula's 2024 survey found 77% of companies have already encountered deepfake fraud attempts. AI voice cloning now requires as little as 3 seconds of reference audio, and a convincing deepfake can be produced for under $100 in compute.
What is the solution to the identity verification crisis?
The structural gap is that no authentication system verifies the identity of one person to another person. Peer-to-peer authentication fills this gap using cryptographic TOTP codes generated from shared secrets stored on physical devices. Unlike passwords, MFA, or biometrics, P2P auth works across every communication channel, and AI synthesis cannot forge a code derived from a secret the attacker has never held.
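As a rough sketch of that verification step, assuming standard RFC 6238 semantics (the function name `verify_spoken_code` is illustrative, not a real product API): the receiving device recomputes the expected code for the current 30-second step, plus one step on either side to absorb clock drift, and compares.

```python
# Illustrative sketch only: checking a code read aloud against a shared
# secret, with a one-step window either side for clock drift.
# `verify_spoken_code` is a hypothetical name, not an actual product API.
import hashlib
import hmac
import struct
import time

def totp(secret, at, step=30, digits=6):
    """Standard RFC 6238 code derivation (HMAC-SHA1, dynamic truncation)."""
    digest = hmac.new(secret, struct.pack(">Q", int(at // step)), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify_spoken_code(secret, spoken, now=None, step=30):
    """Accept the code for the current step or one step either side."""
    now = time.time() if now is None else now
    return any(
        hmac.compare_digest(spoken, totp(secret, now + k * step))
        for k in (-1, 0, 1)
    )
```

A code that matches inside this window proves the speaker's device holds the same secret; a cloned voice alone cannot produce it.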
Sources & Citations
1. FBI Internet Crime Complaint Center (IC3) Annual Report 2023 — $12.5B losses, BEC $2.9B
2. Verizon 2024 Data Breach Investigations Report — 68% human element, 30,000+ incidents analyzed
3. IBM Cost of a Data Breach Report 2024 — $4.88M average, 292-day lifecycle
4. FTC Consumer Sentinel Network Data Book 2023 — $10B consumer fraud losses
5. SlashNext State of Phishing Report 2024 — 1,265% phishing increase post-ChatGPT
6. Sumsub Identity Fraud Report 2023 — 10x deepfake increase
7. Regula Deepfake Trends 2024 — 77% of companies face deepfake attempts
8. McAfee Beware the Artificial Impostor 2023 — 25% of adults targeted by AI voice cloning
9. Proofpoint Human Factor Report 2024 — 99% of threats require human interaction
10. CrowdStrike Global Threat Report 2024 — 62-minute average breakout time
11. AARP Fraud Survey 2023 — 1 in 4 adults know a voice scam victim
12. Microsoft Digital Defense Report 2024 — 99.9% of compromised accounts lack MFA
All statistics are sourced from publicly available reports. Real Authenticator is not affiliated with any cited organization.
Know who you're really talking to
In a world of deepfakes and impersonation, Real Authenticator gives you and your trusted contacts a private, unforgeable way to verify identity. Download today — it's free.
Download on the App Store
Free to download · No credit card required · Privacy-first design