1 March 2026 | Print article

Hybrid Warfare and Disinformation: The Underestimated Cyber Threat to Businesses

2 min Reading Time

Deepfakes, AI-generated disinformation and targeted reputation attacks: Hybrid threats blur the lines between cyberattacks, information warfare and economic crime in 2026. Why every business is affected – and what CISOs can do about it.

TL;DR

  • Targeted disinformation campaigns against companies are rising sharply in 2025/2026 – as a standalone attack vector or as a complement to traditional cyberattacks
  • Real-time deepfakes in video conferences enable CEO fraud at a new level – one financial services provider lost the equivalent of 25 million euros in 2024
  • Traditional IT security protects systems and data, not reputation and trust – the “reputational security gap” remains unguarded for most companies
  • Countermeasures require cross-functional collaboration between IT security, communications, legal, and executive leadership

+38 %: increase in state-sponsored cyberattacks since 2022 (Source: Microsoft Digital Defense Report, 2024)

Disinformation as an Attack Vector

The idea that disinformation only affects governments and elections is dangerously outdated. In 2025 and 2026, a clear trend emerges: targeted disinformation campaigns against businesses are on the rise – as either a standalone threat or an accompaniment to conventional cyberattacks.

The scenarios are real: a deepfake video of the CEO announces a profit warning. AI-generated whistleblower reports about alleged data breaches go viral on social media. Fake reviews and customer testimonials erode trust in a product.

“Hybrid threats blur the boundaries between war and peace. Companies must understand that they are already part of the battlefield.” (ENISA Threat Landscape, 2024)

The Anatomy of Hybrid Attacks

Hybrid warfare combines multiple attack methods into a coordinated campaign:

  1. Phase 1 – Reconnaissance: Social media profiles of executives are analyzed, voice samples and video material collected, and weaknesses in corporate communications identified
  2. Phase 2 – Preparation: Deepfakes are created, fake accounts established on relevant forums and LinkedIn, and insider information either compromised or fabricated
  3. Phase 3 – Attack: Simultaneous execution of a technical cyberattack (e.g., ransomware) and a disinformation campaign. While the IT team deals with the technical incident, the reputational crisis escalates in the media
  4. Phase 4 – Amplification: AI-driven bot networks spread disinformation, and algorithmic amplification on social media ensures widespread reach
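The phase model can also be read as a monitoring checklist. Below is a minimal, illustrative sketch of how each phase could be mapped to example warning signs and the teams that might own the response; the signal descriptions and team assignments are assumptions, not an established framework.

```python
# Illustrative sketch only: maps the four phases described above to example
# warning signs and the teams that might own the response. Phase names,
# signals and team assignments are assumptions, not an established framework.

PHASE_PLAYBOOK = {
    "reconnaissance": {
        "signals": ["scraping of executive social media profiles",
                    "unusual requests for voice or video material"],
        "owners": {"IT security", "communications"},
    },
    "preparation": {
        "signals": ["fake executive accounts on LinkedIn or forums",
                    "leaked or fabricated insider documents"],
        "owners": {"IT security", "legal"},
    },
    "attack": {
        "signals": ["ransomware or intrusion alerts",
                    "coordinated negative coverage starting at the same time"],
        "owners": {"IT security", "communications", "executive leadership"},
    },
    "amplification": {
        "signals": ["bot-like engagement spikes",
                    "identical wording repeated across many accounts"],
        "owners": {"communications", "legal"},
    },
}


def teams_to_alert(observed_signal: str) -> set[str]:
    """Collect every team owning a phase whose example signals match the observation."""
    teams: set[str] = set()
    for phase in PHASE_PLAYBOOK.values():
        if any(observed_signal.lower() in signal for signal in phase["signals"]):
            teams |= phase["owners"]
    return teams


if __name__ == "__main__":
    print(teams_to_alert("bot-like engagement spikes"))
    # -> {'communications', 'legal'}
```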

Fact: According to Europol, the number of AI-generated deepfake videos rose by 550 percent in 2025 compared to the previous year.

Why Traditional Security Measures Fall Short

Conventional IT security protects systems and data – not reputation or trust. Firewalls, EDR, and SIEM offer no defense against a viral deepfake tweet. This reputational security gap remains unaddressed in most organizations.

The challenge: defending against disinformation requires collaboration between IT security, corporate communications, legal, and executive leadership. Few companies have established processes for this.

Deepfakes: Quality Has Become the Problem

By 2025/2026, the technical quality of deepfakes has reached a point where they are nearly undetectable to the human eye. Real-time deepfakes in video conferences – CEO fraud 2.0 – are now documented:

  • A financial services provider in Hong Kong lost the equivalent of 25 million euros in 2024 via a deepfake video call in which fraudsters impersonated the CFO
  • In Germany, several cases emerged in 2025 where deepfake voices of managing directors were used to authorize fraudulent wire transfers
  • Current deepfake detection tools have an accuracy rate of only 70 to 85 percent – too low for reliable automated protection

Fact: According to the FBI, the average loss from deepfake-assisted CEO fraud reached 4.7 million euros per incident in 2025.

Countermeasures for Businesses

Six concrete steps every company should implement:

  1. Deepfake awareness training: Sensitize employees in key roles (finance, executive assistants) to deepfake risks
  2. Verification protocols: Introduce multi-channel verification for critical decisions (wire transfers, personnel actions, press releases) – never act solely on a video call or voice message (see the sketch after this list)
  3. Media monitoring: Automate monitoring of social media, news outlets, and dark web forums for brand mentions and potential disinformation campaigns
  4. Content authenticity: Implement the C2PA standard (Coalition for Content Provenance and Authenticity) for official corporate communications
  5. Crisis communication plan: Prepare statements and response procedures for disinformation attacks – speed is critical
  6. Cross-functional incident response: Expand the IR team to include communications and legal, with specific playbooks for hybrid attacks
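A minimal sketch of the multi-channel verification idea from point 2: a critical request that arrives over one channel, such as a video call, is only released after confirmation over a second, independently registered channel. All class, channel, and function names are illustrative assumptions, not a specific product or standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch of multi-channel verification (point 2 above). A critical
# request arriving over one channel (e.g. a video call) is only released after
# confirmation over a second, independently registered channel such as a
# callback to a number already on file. All names are assumptions.

REGISTERED_CHANNELS = {"callback_known_number", "signed_email", "in_person"}


@dataclass
class CriticalRequest:
    request_id: str
    description: str
    origin_channel: str                        # channel the request arrived on
    confirmations: set = field(default_factory=set)


def confirm(request: CriticalRequest, channel: str) -> None:
    """Record a confirmation, but only from a registered, independent channel."""
    if channel in REGISTERED_CHANNELS and channel != request.origin_channel:
        request.confirmations.add(channel)


def may_execute(request: CriticalRequest) -> bool:
    """Release only once at least one independent channel has confirmed."""
    return len(request.confirmations) >= 1


if __name__ == "__main__":
    wire = CriticalRequest("TX-4711", "urgent transfer requested by 'CFO'", "video_call")
    print(may_execute(wire))                   # False: a video call alone never suffices
    confirm(wire, "callback_known_number")     # callback to the number already on file
    print(may_execute(wire))                   # True
```

The key design point is that the confirmation channel comes from records the company already holds, never from contact details supplied in the suspicious request itself.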

Conclusion

Hybrid threats blur the lines between cyberattacks, information warfare, and economic crime. For CISOs, this means expanding their responsibility: beyond systems and data, they must now also protect reputation and trust. Anyone dismissing disinformation as “not my problem” becomes an easy target.

Key Facts

Phishing volume: Over 3.4 billion phishing emails are sent globally every day.

Reporting rate: Only 3 percent of employees report suspicious emails to the IT department.

Frequently Asked Questions

How can I spot a deepfake during a video conference?

Look for subtle artifacts: unnatural lip sync, odd lighting shifts, missing micro-expressions, and delays during quick head movements. However, prevention is more important than detection: establish verification protocols for critical decisions – a callback via a known number or confirmation through a separate channel.

Are hybrid attacks only a risk for large enterprises?

No. Mid-sized companies are especially vulnerable, as they often lack dedicated communications teams capable of responding quickly to disinformation. Moreover, they are targeted as suppliers or partners to indirectly harm larger organizations.

What role does AI play in defending against disinformation?

AI-powered tools can automate social media monitoring, detect bot networks, and identify deepfakes with 70-85 percent accuracy. However, the technology is not yet reliable enough for fully automated defense. The most effective current approach combines AI monitoring with human judgment.
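A minimal sketch of that combination, assuming a detector that returns a confidence score between 0 and 1 (the detector and the thresholds are hypothetical): low-risk items are only logged, everything else lands in a human review queue.

```python
from typing import Callable

# Illustrative sketch of combining an automated score with human judgment.
# The detector and thresholds are assumptions; at the accuracy levels cited
# above (70-85 percent), no score alone should trigger fully automated action.


def triage(item: str,
           score_fn: Callable[[str], float],
           dismiss_below: float = 0.2,
           escalate_above: float = 0.8) -> str:
    """Route a flagged item by model score, always keeping a human in the loop."""
    score = score_fn(item)
    if score < dismiss_below:
        return "log_only"                # very likely benign, kept for audit
    if score > escalate_above:
        return "urgent_human_review"     # likely fake, but a human still confirms
    return "human_review_queue"          # ambiguous range: standard analyst review


if __name__ == "__main__":
    # Stand-in for a real deepfake/disinformation detection model.
    def fake_detector(item: str) -> float:
        return 0.9 if "deepfake" in item else 0.1

    print(triage("viral clip, suspected deepfake of the CEO", fake_detector))  # urgent_human_review
    print(triage("routine customer review", fake_detector))                    # log_only
```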


About the author: Tobias Massow
