Deepfake Fraud in Companies: When the CEO on the Phone Isn’t the CEO
In February 2024, a financial officer in Hong Kong transferred $25 million – after a video call with the supposed CFO. All participants in the call were deepfakes. This case marks a turning point: social engineering is no longer limited to email and phone. AI makes real-time identity fraud possible.
TL;DR
- $25 million deepfake fraud in Hong Kong (February 2024)
- Voice cloning requires only 3 seconds of audio material (Microsoft VALL-E)
- Human deepfake detection: less than 50 percent accuracy
- CEO fraud damages in 2023: $2.7 billion worldwide (FBI IC3)
From Phishing Email to Deepfake Video Call
Classic CEO fraud works via email: “Transfer €200,000 to this account immediately – confidentially.” As awareness has grown, success rates have dropped. Deepfakes raise the stakes: when the CEO personally calls – via video or phone – suspicion plummets.
The technology is alarmingly accessible. Voice cloning tools like ElevenLabs or Resemble AI produce convincing voice replicas from just a few seconds of audio. Real-time video deepfakes are more complex, but for high-value targets, the investment pays off.
The Anatomy of a Deepfake Attack
The Hong Kong case followed a familiar pattern: first, a phishing email as reconnaissance, then an invitation to an urgent video call. In that call, multiple participants appeared – all AI-generated. The familiar setting (Teams or Zoom), recognizable faces, and group dynamics erased any lingering doubt.
Attackers gather training data from publicly available sources: LinkedIn profiles, YouTube interviews, podcast appearances, and corporate websites – all feeding the models that replicate voice and appearance.
Why Technical Detection (Still) Isn’t Enough
Deepfake detection is an arms race. Today’s detectors scan for telltale artifacts: unnatural blinking, inconsistent lighting, or audio-video desynchronization. Yet each new generation of AI models erases those flaws. Human detection rates hover below 50 percent – no better than chance.
Technical safeguards – like content provenance, the C2PA standard, or digital watermarks – are still in development and far from universal adoption. For now, human-driven processes remain our strongest line of defense.
Countermeasures: Processes Over Technology
The most effective defenses are procedural:
- Enforce the four-eyes principle for all financial transactions above a defined threshold.
- Require a callback via an independent channel (“I’ll call you back on your office number”).
- Pre-agree on code words for sensitive instructions.
- Uphold one clear rule: no video call or phone conversation alone authorizes a payment.
These measures cost nothing and can be rolled out immediately. They’re also the only defense guaranteed to hold up against future, even more sophisticated deepfakes – because they verify identity through a second, independent channel.
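These procedural rules can be made concrete as a simple policy check. The sketch below is illustrative only – the threshold, field names, and approval logic are assumptions, not a real payment system’s API:

```python
from dataclasses import dataclass, field

# Hypothetical threshold above which the four-eyes principle applies
FOUR_EYES_THRESHOLD_EUR = 10_000


@dataclass
class PaymentRequest:
    amount_eur: float
    requested_by: str
    approvals: set = field(default_factory=set)
    # True only after a callback on an independently known number,
    # never a number supplied during the (possibly faked) call itself
    callback_verified: bool = False


def may_execute(req: PaymentRequest) -> bool:
    """A payment is executable only if procedural checks pass --
    never on the strength of a video call or phone call alone."""
    if req.amount_eur >= FOUR_EYES_THRESHOLD_EUR:
        # Four-eyes: at least two approvers distinct from the requester
        approvers = req.approvals - {req.requested_by}
        if len(approvers) < 2:
            return False
        # Independent-channel verification is mandatory
        if not req.callback_verified:
            return False
    return True
```

The key design point: identity claims made inside the call carry no weight in the check. Only out-of-band facts (recorded approvals, a completed callback) can flip the outcome.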
Key Facts
Hong Kong Case: $25 million in losses from a deepfake video call (February 2024)
Total CEO Fraud: $2.7 billion in global losses in 2023 (FBI IC3 Report)
Voice Cloning: Just 3 seconds of audio are enough for a convincing voice replica (VALL-E)
Frequently Asked Questions
Can I detect deepfakes?
It’s extremely difficult. Watch for unnatural lip movements, erratic lighting shifts, absence of subtle facial micro-movements, or slight audio lag. But never rely solely on visual or auditory cues – always confirm through a second, independent channel.
Are we, as a midsize company, at risk?
Yes. Voice cloning demands minimal effort and resources. Even without video deepfakes, a convincing call “from the CEO” to finance or accounting is often enough to trigger action. The barrier to entry keeps falling – and procedural safeguards matter just as much for small and midsize firms as they do for enterprises.
Does cyber insurance cover deepfake fraud?
It depends on your policy. Many cyber insurance plans cover social engineering losses – but often with sublimits (typically €250,000-€500,000). CEO fraud may fall under crime or fidelity policies instead. Scrutinize your contract for explicit coverage of “identity fraud” or “social engineering.”
Related Articles
- Cybersecurity Trends 2026: The 7 Developments Security Decision-Makers Need to Know
- Hybrid Warfare and Disinformation: The Underestimated Cyber Threat to Companies
- Palantir and the Future of Cyber Defense: AI as a Strategic Weapon
More from the MBF Media Network
- Cloud Magazine – Cloud, SaaS & IT Infrastructure
- myBusinessFuture – Digitalization, AI & Business
- Digital Chiefs – C-Level Thought Leadership
Header Image Source: Pexels / Markus Winkler