{"id":7736,"date":"2026-03-25T09:00:00","date_gmt":"2026-03-25T09:00:00","guid":{"rendered":"https:\/\/www.securitytoday.de\/2026\/04\/02\/post_id-5572\/"},"modified":"2026-04-10T08:20:28","modified_gmt":"2026-04-10T08:20:28","slug":"copilot-as-a-security-risk-when-the-ai-assistant-leaks-corporate-secrets","status":"publish","type":"post","link":"https:\/\/www.securitytoday.de\/en\/2026\/03\/25\/copilot-as-a-security-risk-when-the-ai-assistant-leaks-corporate-secrets\/","title":{"rendered":"Copilot as a Security Risk: When the AI Assistant Leaks Corporate Secrets"},"content":{"rendered":"<p style=\"display:inline-block;background:#69d8ed;color:#fff;padding:4px 14px;border-radius:20px;font-size:0.85em;margin-bottom:18px;\">8 min Reading Time<\/p>\n<p><strong>Microsoft 365 Copilot has a zero-click vulnerability with a CVSS score of 9.3. The European Data Protection Supervisor (EDPS) has reprimanded the EU Commission for its use of M365. And 34 percent of German employees use AI tools outside of corporate IT. Three facts, one problem: Companies are rolling out AI assistants without having built the security architecture for them.<\/strong><\/p>\n<p>Copilot is not a chatbot. It is a system that accesses all the data a user can see in SharePoint, OneDrive, Teams, and Outlook &#8211; and sometimes data they shouldn\u2019t see. For IT security teams, this means: Every permission gap, every forgotten &#8220;Anyone&#8221; sharing link, every orphaned workspace suddenly becomes searchable via an AI search engine. This article explains which <a href=\"https:\/\/www.securitytoday.de\/en\/2026\/03\/24\/post_id-5563\/\">attack vectors have been documented<\/a>, what the BSI (Federal Office for Information Security) recommends, and which measures should be mandatory before a Copilot rollout.<\/p>\n<h2>TL;DR<\/h2>\n<ul>\n<li><strong>CVE-2025-32711 (EchoLeak):<\/strong> The first zero-click vulnerability in a production AI system. CVSS 9.3. 
Attackers could exfiltrate data from the Copilot context without user interaction (<em>Infosecurity Magazine<\/em>, May 2025).<\/li>\n<li><strong>EDPS reprimands EU Commission:<\/strong> In March 2024, the European Data Protection Supervisor determined that the EU Commission violated data protection laws when using M365 &#8211; due to insufficient specification of data collection and lack of transfer guarantees.<\/li>\n<li><strong>34 percent shadow AI in Germany:<\/strong> One in three employees uses generative AI with private accounts outside corporate IT (<em>Bitkom 2024<\/em>). Only 23 percent of companies have rules for this.<\/li>\n<li><strong>Oversharing is the biggest risk:<\/strong> Microsoft itself identifies excessive permissions as the most common risk category in Copilot deployments. Copilot makes existing permission errors immediately exploitable.<\/li>\n<li><strong>Gartner warns:<\/strong> By 2027, 40 percent of all AI data breaches will result from cross-border misuse of generative AI (<em>Gartner<\/em>, February 2025).<\/li>\n<\/ul>\n<h2>EchoLeak: The First Zero-Click AI Vulnerability<\/h2>\n<p>In January 2025, researchers from Aim Labs discovered a vulnerability in Microsoft 365 Copilot that opened a new category: <strong>CVE-2025-32711, dubbed &#8220;EchoLeak.&#8221;<\/strong> CVSS score: 9.3 out of 10. The unique aspect: No click, no file opening, no user interaction required. The attacker could exfiltrate data directly from another user\u2019s Copilot context.<\/p>\n<p>Microsoft patched the flaw in May 2025. But EchoLeak marks a turning point: It was the first documented case of a zero-click vulnerability in a production LLM system. 
For <a href=\"https:\/\/www.securitytoday.de\/en\/2026\/03\/19\/post_id-5412\/\">security teams<\/a>, this means AI assistants don\u2019t just create new attack surfaces &#8211; they create attack surfaces that can be exploited without any victim involvement.<\/p>\n<div class=\"evm-stat evm-stat-highlight\" style=\"text-align:center;background:#f0f9fa;border-radius:12px;padding:32px 24px;margin:32px 0;\">\n<div style=\"font-size:48px;font-weight:700;color:#004a59;letter-spacing:-0.03em;\">CVSS 9.3<\/div>\n<div style=\"font-size:15px;color:#444;margin-top:8px;\">EchoLeak (CVE-2025-32711) &#8211; First zero-click vulnerability in a production AI system<\/div>\n<div style=\"font-size:12px;color:#888;margin-top:8px;\">Source: Infosecurity Magazine \/ HackTheBox, May 2025<\/div>\n<\/div>\n<h2>Prompt Injection: Four Documented Attack Vectors<\/h2>\n<p>EchoLeak isn\u2019t the only documented attack vector. Since 2024, security researchers have found multiple ways to manipulate Copilot &#8211; and Microsoft has had to patch each one.<\/p>\n<p><strong>ASCII Smuggling (Johann Rehberger, 2024):<\/strong> The researcher demonstrated how invisible Unicode characters could disguise sensitive data within seemingly harmless hyperlinks. A manipulated email or a crafted document is enough: Copilot executes an indirect prompt injection, collects data (including MFA codes), and hides it in a link. A single user click exfiltrates the data. Microsoft initially classified the finding as &#8220;Low Severity&#8221; &#8211; and only patched it after a public demonstration at the HITCON conference.<\/p>\n<p><strong>Mermaid Diagram Exfiltration (Truesec, September 2025):<\/strong> Copilot could be tricked via manipulated Office documents into embedding email content within interactive links inside Mermaid diagrams. 
Microsoft subsequently disabled interactive links in Mermaid diagrams entirely.<\/p>\n<p><strong>Confidential Label Bypass (January 2026):<\/strong> Copilot accessed protected emails in Sent Items and Drafts despite confidentiality labels being applied. DLP policies were bypassed. Microsoft released an emergency patch.<\/p>\n<p><strong>Indirect Prompt Injection via Email (Zenity, Black Hat 2024):<\/strong> Zenity\u2019s CTO demonstrated at Black Hat USA that Copilot processes manipulated emails autonomously &#8211; without the victim ever opening them. In the demo, Copilot replaced bank details in payment instructions and presented a fake Microsoft login page.<\/p>\n<blockquote style=\"border-left:4px solid #69d8ed;margin:32px 0;padding:20px 24px;background:#fafafa;border-radius:0 8px 8px 0;font-size:1.1em;line-height:1.6;color:#333;\"><p>\n&#8220;The BSI recommends conducting a risk analysis for the specific use case before integrating large AI language models into workflows.&#8221;<br \/>\n<cite style=\"display:block;margin-top:12px;font-size:0.8em;color:#888;font-style:normal;\">&#8211; BSI, <em>Generative AI Models: Opportunities and Risks<\/em>, paraphrased (May 2024)<\/cite>\n<\/p><\/blockquote>\n<h2>Oversharing: The Underestimated Everyday Risk<\/h2>\n<p>The spectacular vulnerabilities dominate headlines. But the biggest Copilot risk is mundane: <strong>excessive permissions in SharePoint, OneDrive, and Teams.<\/strong> Microsoft itself identifies oversharing as the most common risk category in Copilot deployments.<\/p>\n<p>The problem: Copilot uses the existing permission model. 
If a SharePoint folder is shared via an &#8220;Anyone&#8221; link, if orphaned workspaces with active permissions exist, if nested group permissions grant access to data the user never consciously saw &#8211; Copilot makes all of this instantly searchable and summarizable.<\/p>\n<p>An example: An employee asks Copilot, &#8220;What was discussed about Project Alpha last week?&#8221; Copilot searches all Teams chats, emails, and SharePoint documents the user has access to &#8211; including those in channels they never visited, in folders they never opened. If permissions are too broad, Copilot delivers confidential information the user would never have found without it.<\/p>\n<p>For German companies, this is particularly critical. Unlike in the U.S., where intra-company data access is often handled generously, the GDPR sets clear requirements for purpose limitation and data minimization. If Copilot summarizes salary negotiations from an HR channel for a marketing employee because Teams permissions were too broad, it\u2019s not just a security issue &#8211; it\u2019s a GDPR violation. Supervisory authorities in Bavaria and North Rhine-Westphalia have already signaled that AI-assisted data processing receives special scrutiny.<\/p>\n<p>Microsoft\u2019s own recommendation is unequivocal: Before rolling out Copilot, the permission model must be cleaned up. Microsoft\u2019s <em>Oversharing Assessment Blueprint<\/em> calls for enforcing the principle of least privilege, conducting regular permission reviews, and using SharePoint Advanced Management to identify risky sites. 
In practice, this means weeks to months of preparatory work before the first user should activate Copilot.<\/p>\n<div style=\"display:flex;flex-wrap:wrap;gap:12px;margin:32px 0;\">\n<div style=\"flex:1;min-width:160px;text-align:center;background:#f0f9fa;border-radius:10px;padding:24px 20px;border-top:3px solid #69d8ed;\">\n<div style=\"font-size:clamp(1.5em,5vw,2.4em);font-weight:800;color:#004a59;line-height:1;\">34 %<\/div>\n<div style=\"font-size:0.85em;margin-top:8px;color:#444;\">of German employees use AI outside IT<\/div>\n<\/div>\n<div style=\"flex:1;min-width:160px;text-align:center;background:#f0f9fa;border-radius:10px;padding:24px 20px;border-top:3px solid #69d8ed;\">\n<div style=\"font-size:clamp(1.5em,5vw,2.4em);font-weight:800;color:#004a59;line-height:1;\">23 %<\/div>\n<div style=\"font-size:0.85em;margin-top:8px;color:#444;\">of German companies have AI rules<\/div>\n<\/div>\n<div style=\"flex:1;min-width:160px;text-align:center;background:#f0f9fa;border-radius:10px;padding:24px 20px;border-top:3px solid #69d8ed;\">\n<div style=\"font-size:clamp(1.5em,5vw,2.4em);font-weight:800;color:#004a59;line-height:1;\">40 %<\/div>\n<div style=\"font-size:0.85em;margin-top:8px;color:#444;\">of AI data breaches by 2027 expected from cross-border GenAI misuse<\/div>\n<\/div>\n<\/div>\n<div style=\"text-align:center;font-size:12px;color:#888;margin-top:-20px;margin-bottom:24px;\">Sources: Bitkom 2024, Gartner 2025<\/div>\n<h2>EDPS Decision: When the EU Commission Itself Violates Data Protection Law<\/h2>\n<p>On March 8, 2024, the European Data Protection Supervisor (EDPS) officially determined that the EU Commission violated Regulation (EU) 2018\/1725 when using Microsoft 365. 
The core findings: insufficient specification of which personal data Microsoft collects and for what purposes, and no adequate guarantees for data transfers outside the EU\/European Economic Area.<\/p>\n<p>The measure was drastic: Starting December 9, 2024, all data transfers to Microsoft outside the EU\/EEA had to be suspended. The EU Commission has since remedied the violations &#8211; the EDPS closed the investigation in July 2025. But the decision remains a precedent: If the EU Commission itself violates data protection laws with Microsoft M365, what does that mean for a medium-sized company running the same software with fewer resources?<\/p>\n<p>For DACH companies, this is a concrete call to action. A data protection impact assessment (DPIA) under Article 35 GDPR is strongly recommended by data protection lawyers when deploying Copilot. And while Microsoft has announced in-country data processing for Germany by November 2025, whether this fully addresses GDPR concerns remains a matter of debate among legal experts.<\/p>\n<p>In November 2025, the BSI published a specific warning about <em>&#8220;Evasion Attacks on AI Language Models.&#8221;<\/em> The recommendations: precise system prompts, filtering of malicious content in third-party documents, and explicit user confirmation before any action executed by an LLM. CERT-Bund issued a specific advisory on M365 Copilot under WID-SEC-2025-1746. For companies subject to NIS2 or KRITIS regulations, these BSI recommendations are not optional &#8211; they are part of the duty of care that, if neglected, can trigger personal liability for management.<\/p>\n<h2>Shadow AI: The Problem Growing Faster Than Policy<\/h2>\n<p>While IT departments debate Copilot rollouts, employees have long since created facts on the ground. According to Bitkom, <strong>34 percent of German employees<\/strong> use generative AI tools with private accounts outside corporate IT. 
In 8 percent of companies, shadow AI is widespread &#8211; a doubling compared to the previous year. And only 23 percent of companies have rules for AI use.<\/p>\n<p>The global picture is no better: According to a WalkMe survey of 12,000 knowledge workers, 60 percent use AI tools at work, but only 18.5 percent are aware of an official AI policy from their employer. 38 percent share confidential data with AI platforms without authorization.<\/p>\n<p>Gartner predicts: By 2030, more than <strong>40 percent of global companies<\/strong> will experience security or compliance incidents due to unauthorized AI tools. The forecast for 2027 is more specific: 40 percent of all AI data breaches will result from cross-border misuse of generative AI.<\/p>\n<h2>Why Copilot Is More Dangerous Than ChatGPT<\/h2>\n<p>Many companies equate Copilot with ChatGPT &#8211; a chatbot for writing texts. This is a fundamental misunderstanding. ChatGPT works with the data the user inputs. Copilot works with <strong>all data the user has access to<\/strong> &#8211; plus all data in shared resources they could theoretically access.<\/p>\n<p>The practical difference: If an employee feeds ChatGPT an internal document, it\u2019s a conscious decision &#8211; problematic, but traceable. When Copilot autonomously searches emails, aggregates Teams chats, and summarizes SharePoint documents, it happens automatically and invisibly. The user often doesn\u2019t even know which sources Copilot uses to compile its answers.<\/p>\n<p>This has consequences for the attack surface. With ChatGPT, an attacker must trick the user into making a manipulated input. With Copilot, it\u2019s enough to send a manipulated email or place a crafted document in a shared folder. 
Copilot pulls the content autonomously &#8211; that\u2019s the mechanism Zenity\u2019s CTO demonstrated at Black Hat and that Johann Rehberger took to its logical conclusion with ASCII smuggling.<\/p>\n<p>For security teams, this means: <a href=\"https:\/\/www.securitytoday.de\/en\/2026\/03\/24\/post_id-5561\/\">Threat modeling<\/a> for Copilot is fundamentally different than for other AI tools. It\u2019s not enough to train users. The entire data landscape must be secured &#8211; because Copilot finds every gap and serves it up to the user on a silver platter.<\/p>\n<p>Microsoft has recognized this and has offered in-country data processing for 15 countries since late 2025, including Germany. This partially solves the data transfer problem. But it doesn\u2019t solve the oversharing problem or the prompt injection attacks that occur within the tenant\u2019s own boundaries.<\/p>\n<h2>What Should Be Mandatory Before a Copilot Rollout<\/h2>\n<p><strong>1. Permission audit before rollout.<\/strong> <a href=\"https:\/\/www.securitytoday.de\/en\/2026\/03\/20\/post_id-5461\/\">Microsoft recommends SharePoint Advanced Management<\/a> for a systematic permission review. Eliminate &#8220;Anyone&#8221; links, identify orphaned workspaces, and resolve nested group permissions. No Copilot access without a completed permission audit.<\/p>\n<p><strong>2. Comprehensive sensitivity labels.<\/strong> Apply Microsoft Purview Information Protection labels to all documents and emails. Without labels, there\u2019s no effective DLP control &#8211; and Copilot ignores what isn\u2019t labeled.<\/p>\n<p><strong>3. Configure DLP policies for Copilot.<\/strong> Since 2025, Microsoft Purview DLP has explicitly supported the &#8220;Microsoft 365 Copilot&#8221; location. Without these policies, Copilot operates on corporate data without guardrails.<\/p>\n<p><strong>4. Pilot group before organization-wide rollout.<\/strong> Maximum of 50 users in a controlled environment. 
Monitoring: Which data does Copilot display that it shouldn\u2019t? Only scale once the answer is &#8220;none.&#8221;<\/p>\n<p><strong>5. Conduct a GDPR impact assessment.<\/strong> Article 35 GDPR requires a data protection impact assessment when processing is likely to entail high risks. An AI system that accesses all corporate data meets this criterion.<\/p>\n<p><strong>6. Establish an AI usage policy.<\/strong> Clear rules: Which tools are permitted, which data may be input, how is shadow AI handled? The Bitkom figures show: Without a policy, everyone does as they please.<\/p>\n<p>The interplay between official Copilot rollouts and unofficial shadow AI creates a double attack surface. On one hand, the controlled Copilot instances with their documented vulnerabilities. On the other, the uncontrolled private AI tools into which employees input customer data, contract details, and internal strategy papers &#8211; without audit, without logging, without any possibility of traceability. For CISOs, this is a nightmare scenario: Neither the official nor the unofficial AI channel can be fully secured.<\/p>\n<blockquote style=\"border-left:4px solid #69d8ed;margin:32px 0;padding:20px 24px;background:#fafafa;border-radius:0 8px 8px 0;font-size:1.1em;line-height:1.6;color:#333;\"><p>\n&#8220;According to Gartner, AI-powered attacks are the top emerging risk for companies &#8211; for the third consecutive quarter.&#8221;<br \/>\n<cite style=\"display:block;margin-top:12px;font-size:0.8em;color:#888;font-style:normal;\">&#8211; <em>Gartner Q3 2024 Emerging Risks Survey<\/em>, paraphrased<\/cite>\n<\/p><\/blockquote>\n<p>The crucial point: Copilot isn\u2019t a security problem in itself. It\u2019s a multiplier. Those who have their permissions under control benefit. Those who don\u2019t are giving an AI access to everything that\u2019s poorly secured &#8211; and making it searchable, summarizable, and exportable. Preparing for Copilot isn\u2019t an IT task. 
It\u2019s a security task.<\/p>\n<p>A final thought for context: Microsoft isn\u2019t the enemy. Copilot is a productive tool that, with proper preparation, delivers real added value. But preparation is the key &#8211; and most companies underestimate the effort. The EchoLeak vulnerability, the EDPS decision, and the Bitkom figures on shadow AI all point in the same direction: Deploying AI assistants without first getting permissions, data protection architecture, and usage policies in order is negligent. And under NIS2, negligence in IT security can have personal consequences for management.<\/p>\n<h2>Frequently Asked Questions<\/h2>\n<h3>Is Microsoft 365 Copilot insecure?<\/h3>\n<p>Copilot itself isn\u2019t inherently insecure. But it amplifies existing permission issues &#8211; it makes data accessible that users would never have found without Copilot. The documented vulnerabilities (EchoLeak, ASCII smuggling, label bypass) were patched by Microsoft, but they show the system is attackable.<\/p>\n<h3>What is EchoLeak (CVE-2025-32711)?<\/h3>\n<p>The first documented zero-click vulnerability in a production AI system. CVSS score 9.3 out of 10. Attackers could exfiltrate data from the Copilot context without the user having to do anything. Microsoft patched the flaw in May 2025.<\/p>\n<h3>What did the EDPS decide regarding Microsoft 365 usage?<\/h3>\n<p>In March 2024, the European Data Protection Supervisor determined that the EU Commission violated data protection laws when using M365. The violations were remedied by July 2025, but the decision remains relevant as a precedent for all EU organizations using M365.<\/p>\n<h3>Do I need a GDPR impact assessment for Copilot?<\/h3>\n<p>Data protection lawyers strongly recommend conducting a data protection impact assessment under Article 35 GDPR, as Copilot processes personal data and accesses broad corporate data. 
Microsoft\u2019s Data Processing Addendum contains no special provisions for AI\/Copilot.<\/p>\n<h3>How widespread is shadow AI in German companies?<\/h3>\n<p>34 percent of German employees use generative AI tools with private accounts outside corporate IT (<em>Bitkom 2024<\/em>). In 8 percent of companies, shadow AI is widespread. Only 23 percent have rules for AI use.<\/p>\n<h3>What must I do before a Copilot rollout?<\/h3>\n<p>At minimum: Conduct a permission audit in SharePoint\/OneDrive\/Teams, apply sensitivity labels comprehensively, configure DLP policies for Copilot, conduct a GDPR impact assessment, and establish an AI usage policy. Microsoft recommends a pilot test with a maximum of 50 users.<\/p>\n<h3>Can Copilot access confidential emails?<\/h3>\n<p>Yes &#8211; if permissions allow it. In January 2026, a bug was documented where Copilot accessed protected emails in Sent Items and Drafts despite confidentiality labels. Microsoft released an emergency patch.<\/p>\n<div style=\"background:#f0f8ff;border-radius:8px;padding:20px 24px;margin:24px 0;border-top:3px solid #69d8ed;\">\n<h2 style=\"margin-top:0;margin-bottom:12px;font-size:1.05em;\">Editor\u2019s Reading Recommendations<\/h2>\n<ul>\n<li><a href=\"https:\/\/www.securitytoday.de\/en\/2026\/03\/24\/post_id-5569\/\">Supply Chain Attack on Trivy: When the Security Scanner Itself Becomes a Weapon<\/a><\/li>\n<li><a href=\"https:\/\/www.securitytoday.de\/en\/2026\/03\/19\/post_id-5412\/\">Identity Attacks 2026: Why Hackers Log In Instead of Breaking In<\/a><\/li>\n<li><a href=\"https:\/\/www.securitytoday.de\/en\/2026\/03\/24\/nis2-in-deutschland-was-unternehmen-jetzt-wissen-muessen\/\">NIS2 in Germany: What Companies Need to Know Now<\/a><\/li>\n<\/ul>\n<\/div>\n<div style=\"background:#f0f8ff;border-radius:8px;padding:20px 24px;margin:24px 0;border-top:3px solid #69d8ed;\">\n<h2 style=\"margin-top:0;margin-bottom:12px;font-size:1.05em;\">More from the MBF Media Network<\/h2>\n<ul>\n<li><a 
href=\"https:\/\/www.cloudmagazin.com\">cloudmagazin &#8211; Cloud, SaaS, and IT Infrastructure<\/a><\/li>\n<li><a href=\"https:\/\/www.mybusinessfuture.com\">MyBusinessFuture &#8211; Digitalization and AI<\/a><\/li>\n<li><a href=\"https:\/\/www.digital-chiefs.de\">Digital Chiefs &#8211; C-Level Thought Leadership<\/a><\/li>\n<\/ul>\n<\/div>\n<p style=\"text-align:right;font-style:italic;color:#888;margin-top:32px;\">Header Image Source: Pexels \/ cottonbro studio (px:6153354)<\/p>\n","protected":false},"excerpt":{"rendered":"8 min Reading Time Microsoft 365 Copilot has a zero-click vulnerability with a CVSS score of 9.3. The European Data Protection Supervisor (EDPS) has reprimanded the EU Commission for its use of M365. And 34 percent of German employees use AI tools outside of corporate IT. Three facts, one problem: Companies are rolling out AI [&hellip;]","protected":false},"author":55,"featured_media":5571,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"copilot","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"Copilot security risk: Zero-click vulnerability (CVSS 9.3) leaks corporate data. 
Learn how to protect your organization\u2014read now and act today.","_yoast_wpseo_meta-robots-noindex":"","_yoast_wpseo_meta-robots-nofollow":"","_yoast_wpseo_meta-robots-adv":"","_yoast_wpseo_canonical":"","_yoast_wpseo_opengraph-title":"","_yoast_wpseo_opengraph-description":"","_yoast_wpseo_opengraph-image":"","_yoast_wpseo_opengraph-image-id":"","_yoast_wpseo_twitter-title":"","_yoast_wpseo_twitter-description":"","_yoast_wpseo_twitter-image":"","_yoast_wpseo_twitter-image-id":"","footnotes":""},"categories":[251],"tags":[],"class_list":["post-7736","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"wpml_language":"en","wpml_translation_of":5572,"_links":{"self":[{"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/posts\/7736","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/users\/55"}],"replies":[{"embeddable":true,"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/comments?post=7736"}],"version-history":[{"count":3,"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/posts\/7736\/revisions"}],"predecessor-version":[{"id":10222,"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/posts\/7736\/revisions\/10222"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/media\/5571"}],"wp:attachment":[{"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/media?parent=7736"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/categories?post=7736"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.securitytoday.de\/en\/wp-json\/wp\/v2\/tags?post=7736"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}",
"templated":true}]}}