Copilot Can Be Weaponized: What CVE-2026-26133 Means for Microsoft 365 Admins

Most phishing attacks rely on the same old toolkit: a convincing sender address, a malicious attachment, maybe a link dressed up with a familiar domain. Users have been trained – sometimes painfully – to be sceptical of those signals. But what happens when the attack arrives through a summary generated by your own AI assistant?

That is exactly what CVE-2026-26133 demonstrates. Discovered by Permiso Security researcher Andi Ahmeti and patched by Microsoft on March 11, 2026, this vulnerability exposed a cross-prompt injection flaw in Microsoft 365 Copilot’s email and Teams summarization features. No macros. No attachments. Just a carefully crafted email – and Copilot does the rest.

Source:https://permiso.io/blog/copilot-prompt-injection-ai-email-phishing

What Is a Cross-Prompt Injection Attack?

To understand why this matters, it helps to understand the attack first.

Large language models like the one powering Microsoft 365 Copilot do not inherently distinguish between “instructions from the system” and “text from an email in your inbox.” When Copilot summarizes an email, it ingests that email’s content as part of its input. If an attacker embeds instructions inside that content – instructions that look like legitimate directives to the model – the model may execute them.

This is called a cross-prompt injection attack. The attacker does not need access to your tenant, your credentials, or any Microsoft system. They need to get an email into someone’s inbox. That is a bar most threat actors cleared years ago…. pretty scare when you think about it.

CVE-2026-26133 In action

Ahmeti’s research showed that attackers could craft emails containing hidden prompts designed to steer Copilot’s output. Instead of receiving an accurate summary of an incoming message, the target user would see Copilot-generated content shaped by the attacker – fake security alerts, spoofed internal notifications, or malicious links presented within what appeared to be a trusted AI response.

The attack surface was broader than just email. Testing across three Copilot surfaces revealed different levels of exposure:

Outlook Summarize button – Occasionally leaked injected commands, particularly when the email contained sufficient natural padding to obscure the prompt from basic filters.

Outlook Copilot Pane – Generally more cautious, but still vulnerable under specific conditions.

Teams Copilot – The most consistently exploitable surface. Attacker-shaped summaries were reliably produced, and in some configurations Copilot could be made to exfiltrate internal data – Teams messages, SharePoint file references – via seemingly legitimate prompts.

The mechanism exploits something fundamental: users trust AI-generated summaries. When Copilot tells you what an email says, you are far less likely to scrutinise that output than you would the raw email itself. The attacker does not need to fool you directly. They need to fool Copilot, and let Copilot fool you…. in AI we trust 🙂

Timeline: From Disclosure to Patch

Well this is patched, i dnot care… well maybe you should

The vulnerability was disclosed to Microsoft in January 2026. Microsoft confirmed the issue on January 28 and worked through a phased patch deployment across all affected surfaces, completing the rollout on March 11, 2026.

If your organization uses Microsoft 365 Copilot and has not reviewed activity logs from January through mid-March, this is worth adding to the backlog. The window was open for nearly six weeks before patching was complete.

What Admins Should Do Now

Microsoft has patched CVE-2026-26133 at the platform level, so there is no admin-side configuration change required for the specific vulnerability. But the attack exposed something that a patch alone does not fix: Copilot’s effectiveness as a threat amplifier in an environment with poor hygiene.

I would still take a look at my CoPilot settings in the tenant, an often overlooked place when i talk to customers.

For instance, how does your Data security look under settings ? This a really good place to start

Here is where to ALSO focus:

Enforce Purview sensitivity labels on Copilot interactions. Microsoft Purview Data Loss Prevention for Microsoft 365 Copilot reached general availability earlier this year. It allows you to block Copilot from processing files carrying specific sensitivity labels – a direct control against data being surfaced or exfiltrated through Copilot responses. If you have not configured this, it should be a priority….
Its relatively easy
1. create a label – with good explanation
2. Publish it
3. Create the DLP policy in Purview

And the user will see this when they try to utilize protected material.


Audit SharePoint permissions before expanding Copilot access. This has been told time and time again by almost all that care about security in relation AI.
Copilot is only as safe as the permissions model underneath it. Run the SharePoint data access governance reports in the admin center to identify overshared sites and trigger site access reviews for the worst offenders.

Review Copilot plugin approvals. Third-party plugins that connect to Copilot can pull data into responses outside of what users expect. The admin center allows you to restrict which plugins are available – apply the principle of least privilege here as you would anywhere else.

Monitor Copilot activity logs. Unified audit logging in Microsoft Purview captures Copilot interactions. Look for unusual patterns: broad content retrievals, activity at unusual hours, queries that do not match user roles. The signals are there if you look for them.

Think carefully about agentic Copilot deployment. The attack surface shifts substantially when Copilot moves from summarization to action – writing emails, creating calendar entries, triggering workflows. Before enabling agentic capabilities in production, map the data and action boundaries explicitly.
By being agentic, Copilot can not only FIND data it shouldnt, it can even share it 🙂

Takeaways

CVE-2026-26133 is patched. The specific attack Ahmeti documented no longer works the way it did in January. But the vulnerability was only possible because Copilot, by design, trusts the content it reads. That is not a bug Microsoft can fully patch – it is a fundamental property of how large language models process input.

What this means practically: as Copilot becomes more capable and more embedded in daily workflows, the value of finding and exploiting these injection paths increases. Researchers found this one. Threat actors will keep looking.

The organizations that are well-positioned are the ones that treated Copilot deployment as a security project from the start – not an IT rollout that security reviews later. If that ship has already sailed in your tenant, now is a reasonable time to reassess.

Relevant Microsoft documentation: Microsoft Purview DLP for Copilot | Copilot activity logs in Purview | SharePoint data access governance

Enjoy your AI powered day. 🙂

Leave a Reply

Your email address will not be published. Required fields are marked *