The story
Thornfield Capital is a mid-size private equity firm. The CEO, Richard Park, receives a call at 3:45 pm on a Tuesday from a number he recognizes as the company's general counsel, Patricia Walsh.
Patricia's voice is calm but clipped — the controlled urgency Richard has heard before when she's managing litigation exposure. She explains that a settlement in a pending employment matter has reached a critical juncture. Opposing counsel has demanded a wire by end of business today or the deal collapses. The amount is $340,000 to their law firm's escrow account. She's sending the wire instructions now.
Richard asks if legal has reviewed the counterparty's terms. Patricia confirms. He asks whether the wire will go through the standard approval process. She explains that the settlement's confidentiality terms require limiting knowledge to the two of them, at least until the paperwork is signed.
Richard approves the wire. The instructions arrive by email seconds later, as promised. The transfer is processed at 4:22 pm.
At 8:14 am the next morning, Patricia walks into Richard's office to discuss a different matter. Richard mentions approving the settlement wire. Patricia has no idea what he's talking about. She has been in depositions all afternoon, her phone off.
The attack used AI voice synthesis trained on Patricia's voice, sourced from a recorded bar association panel she participated in publicly. Her phone number was spoofed. The email with wire instructions was sent from a lookalike domain registered six weeks prior. The 'confidentiality' framing was specifically designed to prevent Richard from consulting anyone who might have raised questions.
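Lookalike domains like the one used here can often be caught mechanically before a human ever sees the email. As an illustrative sketch (the domain names and thresholds below are hypothetical, not part of any specific product), a mail gateway can flag sender domains that sit within a small edit distance of a trusted domain:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list) -> bool:
    """Flag a domain that is close to, but not exactly, a trusted domain."""
    sender = sender_domain.lower()
    return any(
        sender != trusted and edit_distance(sender, trusted) <= 2
        for trusted in trusted_domains
    )

# "thornfie1d.com" (digit 1 in place of the letter l) is one edit away
# from the hypothetical real domain, so it gets flagged.
print(is_lookalike("thornfie1d.com", ["thornfield.com"]))  # → True
print(is_lookalike("thornfield.com", ["thornfield.com"]))  # → False
```

A distance threshold of 1 or 2 catches single-character swaps and homoglyph tricks; larger thresholds trade more false positives for more coverage.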
What happened
$340,000 transferred to a fraudulent escrow. Legal recovery proceedings ongoing.
What stops it
A five-second code exchange confirms the actual general counsel — the synthesized voice cannot produce it.
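One common way to implement such a code exchange is a time-based one-time password (RFC 6238) derived from a secret shared at enrollment. Whether Real Authenticator works exactly this way is an assumption; the sketch below simply shows why a cloned voice alone cannot produce the code, since it requires the secret, not the speaker:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: derive a short numeric code from a shared secret
    and the current 30-second time window."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical enrollment secret held by both parties' devices.
shared_secret = b"enrolled-at-onboarding"

# The caller reads the current code aloud; the listener checks it
# against the code their own enrolled device shows.
spoken = totp(shared_secret)
assert spoken == totp(shared_secret)
```

Because the code changes every 30 seconds and is computed from a secret that never appears in any public recording, an attacker with a perfect voice clone but no enrolled device has nothing to read out.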
What this scenario teaches us
Confidentiality instructions in a financial request are a social engineering tactic, not a legitimate business requirement. Real legal processes have documentation trails. Any request that asks you to bypass normal approvals should increase, not decrease, verification requirements.
AI voice synthesis can impersonate a specific person's voice using freely available public audio. Familiarity with a voice is no longer a reliable identity signal.
Time pressure and confidentiality are the two most reliable social engineering levers in executive impersonation attacks. Real Authenticator code verification neutralizes both — it takes five seconds and is never a violation of confidentiality.
Caller ID can be spoofed to display any number, including internal corporate numbers.
Prevention checklist
Require code verification for any financial approval initiated by phone call, regardless of caller
Establish a policy: confidentiality requests never override verification requirements
Enroll all C-suite and legal staff in Real Authenticator connections
Conduct executive awareness training specifically on voice deepfake scenarios
Frequently asked questions
Would a callback to the executive's known number help?
A callback to a known number is a reasonable additional layer. Note that attackers who know your verification practices may intercept or anticipate the callback. Real Authenticator's code verification is device-based and cannot be intercepted by an attacker who doesn't have physical access to the enrolled phone.
Sources & citations
Loss figures are based on documented cases or FBI IC3 reported averages. Individual scenario details are illustrative reconstructions based on documented attack patterns. Real Authenticator is not affiliated with any cited organization.