
Deepfake video impersonation has crossed from theoretical risk to documented loss. The 2024 Arup attack cost $25 million after a finance employee was deceived by a live AI-generated video call. Here's what changed, what's at stake, and how to build your defenses.
Key Takeaways
- AI-generated deepfakes can now synthesize the voice and face of a CFO in real time, from as little as 3 seconds of audio
- The Arup attack (early 2024) resulted in $25 million in losses after a finance employee was deceived by a live deepfake video call
- 44% of financial professionals have already reported encountering deepfake-driven fraud
- Deepfake video impersonation bypasses both technical controls and the human instinct to "verify by sight"
- Mitigation requires layered verification protocols, continuous staff training, and mandatory "verify first" procedures
When Seeing Is No Longer Believing
For decades, security awareness training has leaned on a simple heuristic: if you can see and hear someone you know, the interaction is legitimate. That assumption is no longer safe. Deepfake video impersonation has reached the point where attackers can synthesize the voice and face of a senior executive in real time, conduct a convincing video call, and pressure employees into authorizing fraudulent transactions, all without setting off a single technical alarm.
For financial institutions, where wire transfers are authorized daily and trust in leadership is a core operating assumption, this represents a new class of threat. Not a vulnerability in your perimeter. A vulnerability in your people, and in the very identity verification mechanisms your institution relies on.
How Deepfake Video Impersonation Works
Classic vishing relied on impersonation over voice calls, with attackers using social engineering skills and, occasionally, voice changers. The success of those attacks depended entirely on the individual attacker's persuasiveness, and they often fell apart under probing questions.
Today's deepfake tools can replicate a specific person's voice with 85% accuracy from a 3-second audio sample. When combined with real-time video synthesis, feeding off publicly available footage of executives on earnings calls, conference recordings, or LinkedIn interviews, the result is a convincing, interactive video call where the attacker appears to be someone they are not.
These tools are no longer the preserve of state-sponsored threat actors. As generative models become more accessible and cybercrime kits proliferate, the barrier to entry has collapsed. The question CISOs in the financial sector must ask is no longer whether their organization will be targeted, but when, and whether their workforce is equipped to detect it.
The Arup Attack: A Turning Point for the Industry
In early 2024, a finance worker at Arup's Hong Kong office was deceived into executing 15 separate wire transfers totaling $25 million, following a video call with what appeared to be the company's UK-based CFO and several colleagues.
The attack began with a phishing email, which the employee initially dismissed as suspicious. The attackers escalated by inviting the victim to a video call. Using high-fidelity deepfakes, they simulated the CFO and other colleagues in real time. The visual and auditory realism of the "executives", combined with the pretextual framing of a confidential financial operation, overrode the employee's initial skepticism entirely. This was not a failure of technology. Firewalls, email filters, and endpoint controls all functioned as intended. It was a failure of the trust model itself.
The attack combined two well-documented social engineering techniques: pretexting (framing the transfers as part of a secret, high-stakes operation) and authority exploitation via deepfake (using the visual credibility of senior leadership to enforce compliance through perceived hierarchical pressure).
The $25 million loss was the immediate damage. The broader implication, and the reason the case became a wake-up call for CISOs globally, is that visual identity verification is no longer a reliable security heuristic.
The Broader Trend: Deepfake Fraud Is Scaling
The Arup case was not an isolated incident. It sits within a rapidly accelerating trend.
- Deepfake activity increased an estimated 162% in 2025, driven by accessible generative AI tools (Pindrop, 2025 Voice Intelligence & Security Report)
- Over $200 million in financial losses were attributed to deepfake fraud in Q1 2025 alone (Resemble AI)
- 44% of financial professionals have already reported encountering deepfake-driven fraud
- The insurance sector saw a 475% rise in synthetic voice fraud (ENISA Threat Landscape 2025)
Beyond individual firms, deepfake impersonation now extends to investor-facing fraud: ongoing 2025/2026 campaigns have circulated AI-generated videos of senior Goldman Sachs executives endorsing fraudulent investment schemes on social media, eroding institutional trust without touching the firm's internal systems at all.
By 2026, AI agents are expected to leverage deepfakes to pose as job candidates during hiring processes, creating a new vector for organizational infiltration.
Why Traditional Defenses Fall Short
Deepfake video impersonation is particularly difficult to defend against for three reasons:
1. It bypasses technical controls entirely. There is no malicious attachment, no suspicious URL, no anomalous network traffic. The attack vector is a video call.
2. It exploits the most trusted verification mechanism humans have. Identity verification in real-world interactions relies on visual and auditory recognition. Deepfakes weaponize both at scale.
3. It operates through legitimate channels. Video conferencing platforms, phone calls, voicemail: all standard workplace communication tools. Behavioral controls struggle to flag what looks like normal activity.
What Financial Institutions Must Do
Mitigation requires a combination of procedural controls, technical layering, and, critically, staff preparedness.
Establish mandatory out-of-band verification for high-value actions. Any request to authorize a wire transfer, reset credentials, or share sensitive data should require a secondary confirmation through a pre-established, independent channel, regardless of how convincing the initial interaction appears.
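To make the control concrete, here is a minimal sketch in Python of what an out-of-band confirmation gate might look like. Everything in it is an illustrative assumption rather than a reference to any specific product: the registry, the threshold value, and the send_via_independent_channel helper are all hypothetical. The essential property is that the confirmation channel is looked up from a trusted, pre-established registry rather than taken from the request (or the video call) that triggered it.

```python
import secrets

# Hypothetical registry mapping executives to pre-established callback
# channels. In practice this would come from a system of record (e.g. HR),
# never from details supplied during the suspicious interaction itself.
CALLBACK_REGISTRY = {
    "cfo-001": {"name": "CFO", "callback_phone": "+44-20-0000-0000"},
}

HIGH_VALUE_THRESHOLD_USD = 50_000  # illustrative policy threshold


def send_via_independent_channel(phone: str, code: str) -> None:
    """Placeholder for delivery over a separate channel (SMS, secure app)."""
    print(f"[out-of-band] one-time code {code} sent to {phone}")


def authorize_wire_transfer(requester_id: str, amount_usd: float) -> bool:
    """Gate high-value transfers behind out-of-band confirmation."""
    if amount_usd < HIGH_VALUE_THRESHOLD_USD:
        return True  # below threshold: standard controls apply

    record = CALLBACK_REGISTRY.get(requester_id)
    if record is None:
        return False  # unknown requester: deny by default

    # One-time challenge delivered over the registered channel. An attacker
    # running a deepfake on the original call never sees this code.
    code = secrets.token_hex(4)
    send_via_independent_channel(record["callback_phone"], code)

    entered = input(f"Enter the code confirmed by {record['name']}: ").strip()
    return secrets.compare_digest(entered, code)


if __name__ == "__main__":
    approved = authorize_wire_transfer("cfo-001", 2_000_000)
    print("Transfer approved" if approved else "Transfer blocked")
```

The design point is not the code but the separation of channels: approval depends on information that only the genuine executive's registered device can receive, which is precisely what a real-time deepfake on the original call cannot forge.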
Train staff to recognize deepfake indicators. Unusual lighting, slight audio lag, unnatural blinking, and inconsistent lip-sync are current tells, but models are improving rapidly. Training must be updated continuously, not annually.
Reinforce "verify first" as a non-negotiable reflex. The instinct to comply when a CFO is on a video call asking for urgent action is deeply human. Security culture must actively counter that instinct with procedural habit. Employees who pause to verify should be recognized, not viewed as obstructing operations.
Audit your voice and video verification protocols against the AI threat. Voice approval for wire transfers, video-based KYC, and similar processes need explicit review in light of deepfake capabilities. Many organizations have yet to modernize these controls against generative AI threats.
Key Questions for CISOs
- Have you audited your verification processes (e.g., voice or video approval for wire transfers) against the risk of AI-cloned voices and deepfake video impersonation?
- Does your current security awareness training include live, realistic deepfake scenarios?
- Is "verify first" embedded as a cultural reflex across all departments authorized to handle financial transactions?
- Are your incident response protocols updated to address the specific characteristics of deepfake-assisted social engineering?
Download the Full Report
This article draws from the 2026 Social Engineering Risk Report for Financial Services, produced by Arsen: a comprehensive guide to AI-enabled threat vectors, real-world attack case studies, regulatory implications, and a CISO-ready action checklist.
Download the full report to access the complete threat landscape analysis, the CISO checklist against AI social engineering, and detailed mitigation frameworks for the financial sector.
Sources: Pindrop 2025 Voice Intelligence & Security Report; Resemble AI; ENISA Threat Landscape 2025; 2026 Social Engineering Risk Report for Financial Services, Arsen.