Blog
July 2, 2025

Why Microsoft Purview Can't Protect Your Organization From M365 Copilot Security Risks

Microsoft Purview inadequately protects organisations from M365 Copilot security risks. Copilot amplifies permission errors by making SharePoint data easily searchable, exposing sensitive information. Purview can't detect prompt injection attacks and only reacts after breaches occur. AI-related security incidents have climbed from 27% to 40% of all data security incidents, yet most organisations deployed Copilot without fixing underlying permission issues. While Purview identifies problems, fixing years of poor security management remains manual work for overwhelmed teams.


TL;DR

Microsoft Purview isn't equipped to handle the unique security challenges of bringing generative AI into your organization. Metomic's 2025 CISO survey found that 67% of security teams worry about AI exposing sensitive data, and they are right to, particularly as Copilot amplifies every existing permission error by turning your SharePoint sprawl into an easily searchable database. Purview can't catch sophisticated prompt injection attacks, misses when Copilot pulls sensitive data into chat sessions, and only reacts after damage is done. The bottom line is this: 40% of data security incidents are now AI-related (up from 27% in 2023), with average breach costs hitting $4.88 million. Most of the 37,000+ organizations using Copilot deployed it without fixing underlying security issues. Purview helps identify problems but can't fix years of poor permission management; unfortunately, that manual work falls entirely on security teams already overwhelmed by threats that didn't exist several months ago.

How Does Copilot Amplify Your Existing Permission Problems?

What happened: Because of the power and speed with which AI can proactively surface content, generative AI amplifies the risk of oversharing and data leakage. Companies are discovering that Copilot doesn't just find information; it weaponizes every permission mistake you've ever made.

What does this look like: Sarah from HR asks Copilot to "summarize our Q3 performance." Instead of just seeing her team's metrics, she gets the entire company's financial data because someone in Finance accidentally shared the board deck with "Everyone." Copilot can also surface files that were shared via 'Anyone' links, though only if a recipient has opened the link at least once.

Why does this matter: One of the primary concerns with Microsoft Copilot is its potential for over-permissioning, which can lead to unintended data access across an organisation. Your existing SharePoint sprawl just became a searchable database accessible to anyone with a Copilot license. Copilot for Microsoft 365 is now part of the day-to-day tech stack for over 37,000 organisations, with around a million paying customers, but most rushed deployment without fixing the underlying permission issues first.
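
If you want to see how exposed you already are, the Microsoft Graph API can show you which files carry anonymous 'Anyone' links. Below is a minimal Python sketch (not a complete audit tool) assuming an app registration with Files.Read.All or Sites.Read.All and an access token already in hand; it only walks a drive's top-level folder, so a real scan would recurse into subfolders too.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def find_anyone_links(drive_id: str, token: str):
    """Flag items in a drive's top-level folder that carry anonymous 'Anyone' links."""
    headers = {"Authorization": f"Bearer {token}"}
    url = f"{GRAPH}/drives/{drive_id}/root/children"
    flagged = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for item in page.get("value", []):
            perms_url = f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions"
            perms = requests.get(perms_url, headers=headers, timeout=30).json()
            for perm in perms.get("value", []):
                link = perm.get("link") or {}
                # scope 'anonymous' corresponds to "Anyone with the link" sharing
                if link.get("scope") == "anonymous":
                    flagged.append((item.get("name"), link.get("webUrl")))
        url = page.get("@odata.nextLink")  # follow pagination through large folders
    return flagged
```

Anything this turns up is content Copilot could potentially surface once a recipient has opened the link, so it's a sensible starting point for the permission clean-up.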

Can Purview Detect Advanced Prompt Injection Attacks?

What happened: Security researcher Johann Rehberger uncovered a vulnerability in Microsoft 365 Copilot that allowed the theft of a user's emails and other personal information. The exploit combines prompt injection via a malicious email, automatic tool invocation to read other emails or documents, and ASCII Smuggling to stage data for exfiltration.

What does this look like: An attacker sends your CEO an innocent-looking email about quarterly earnings. Hidden in the email are invisible instructions that tell Copilot to search for sensitive emails and secretly encode the data into clickable links. Copilot searches for Slack MFA codes because an email it analyzed said so! When the CEO clicks what looks like a normal link, company secrets head straight to the attacker's server.

Why does this matter: Microsoft Defender and Purview simply don't have the capabilities today to detect these "prompt-ware" attacks. They offer some user behavior analytics, which is helpful, but an attack like this is surgical: the attacker crafts a payload, sends it to a single target, and the existing defenses aren't going to spot it.
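
You can at least screen inbound content for the "invisible ink" this exploit relies on. ASCII smuggling hides instructions in Unicode tag characters (U+E0000 to U+E007F), which render as nothing but map one-to-one onto ASCII. Here's a small, self-contained Python check, offered as a sketch rather than a complete defence:

```python
# Unicode "Tags" block (U+E0000-U+E007F): invisible code points that mirror ASCII,
# used in ASCII smuggling to hide instructions inside otherwise normal-looking text.
TAG_BLOCK = range(0xE0000, 0xE0080)
OTHER_INVISIBLE = {0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF}  # zero-width characters

def find_hidden_characters(text: str):
    """Return (index, codepoint, decoded char) for characters that could hide instructions."""
    hits = []
    for i, ch in enumerate(text):
        cp = ord(ch)
        if cp in TAG_BLOCK:
            offset = cp - 0xE0000
            decoded = chr(offset) if 0x20 <= offset < 0x7F else ""  # tag chars encode ASCII
            hits.append((i, hex(cp), decoded))
        elif cp in OTHER_INVISIBLE:
            hits.append((i, hex(cp), ""))
    return hits

def smuggled_payload(text: str) -> str:
    """Reassemble any ASCII hidden with tag characters so an analyst can read it."""
    return "".join(decoded for _, _, decoded in find_hidden_characters(text) if decoded)
```

A filter like this won't catch plain-text prompt injections, but it does expose the hidden payloads that make this particular exploit so hard to spot.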

What Happens When You Have Thousands of Sites to Secure?

What happened: A concern that comes up again and again with organisations is the fear that Copilot might surface sensitive information it should not have access to, because IT and compliance teams don't really know who has access to what. Organisations are realizing that Copilot forces them to confront years of poor permission management.

What does this look like: Your security team discovers that 40% of SharePoint sites have "Everyone" permissions. Anyone in the organisation can discover and join your public SharePoint sites, teams, and communities without prior approval, and the contents of public sites may be picked up by Copilot even for users who are not members of the group. Suddenly, Copilot is democratizing access to data that was only ever "secure" through obscurity.

Why does this matter: You might have hundreds or thousands of SharePoint sites to assess and right-size information access. Purview's Data Security Posture Management can help identify the mess, but fixing it requires manually reviewing every site, every permission, every link.
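
Even getting a count of what you're dealing with helps scope the clean-up. A rough Python sketch using Microsoft Graph's site search (it assumes Sites.Read.All and a valid access token; activity and permission details still need the SharePoint admin tooling):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_all_sites(token: str):
    """Build a flat inventory of SharePoint sites so access reviews can be prioritised."""
    headers = {"Authorization": f"Bearer {token}"}
    url = f"{GRAPH}/sites?search=*"  # wildcard search returns sites across the tenant
    sites = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for site in page.get("value", []):
            sites.append({"name": site.get("displayName"), "url": site.get("webUrl")})
        url = page.get("@odata.nextLink")  # keep paging until the inventory is complete
    return sites
```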

Are Purview's Security Controls Keeping Up With AI Threats?

What happened: Gen AI introduces new security and safety risks that necessitate the implementation of additional controls. For example, malicious users may execute prompt injection attacks to induce unauthorized behaviors from GenAI. While Microsoft has built some guardrails into Copilot, security teams are discovering massive blind spots.

What does this look like: Your Purview dashboard shows everything is "compliant," but employees are using Copilot to accidentally leak customer data in summarization requests, or are falling victim to conditional prompt injections that only activate for specific users. Imagine a malicious email carrying instructions for an LLM that trigger only when the CEO looks at it.

Why does this matter: Microsoft 365 Copilot uses AI-based classifiers and content filters to flag different types of potentially harmful content in user prompts or generated responses, but these are reactive, not proactive. By the time Purview detects a problem, the damage is often done.

How Complete Are Your Audit Trails Really?

What happened: Security teams assume Purview's audit logs will catch everything, but Copilot interactions create unique visibility challenges that traditional auditing wasn't designed for.

What does this look like: Your compliance officer asks for a report on who accessed the M&A documents. The audit trail shows normal SharePoint access, but misses that Copilot pulled excerpts from those documents into 47 different chat sessions across the company.

Why does this matter: Audit logs generated by Microsoft 365 Copilot can be retained for up to 180 days for Audit (Standard) customers and up to one year for Audit (Premium) license holders. But knowing something happened and understanding the full impact are two different things.
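
If you want those Copilot interactions in your own tooling rather than just the Purview portal, the Office 365 Management Activity API exposes them under the Audit.General content type. Here's a hedged Python sketch: it assumes an app with ActivityFeed.Read permission and an active Audit.General subscription, and the operation name "CopilotInteraction" should be verified against your tenant's audit schema.

```python
import requests

def fetch_copilot_events(tenant_id: str, token: str, start: str, end: str):
    """Pull Copilot interaction records from the Office 365 Management Activity API.

    Assumes an app with ActivityFeed.Read permission and an existing
    Audit.General subscription; start/end are ISO-8601 timestamps and the
    API limits each query to a 24-hour window.
    """
    base = f"https://manage.office.com/api/v1.0/{tenant_id}/activity/feed"
    headers = {"Authorization": f"Bearer {token}"}
    params = {"contentType": "Audit.General", "startTime": start, "endTime": end}

    listing = requests.get(f"{base}/subscriptions/content",
                           headers=headers, params=params, timeout=30)
    listing.raise_for_status()

    events = []
    for blob in listing.json():  # each entry points at a batch of audit records
        records = requests.get(blob["contentUri"], headers=headers, timeout=30).json()
        for record in records:
            # Copilot chat interactions are logged under this operation name;
            # verify the exact value against your tenant's audit schema.
            if record.get("Operation") == "CopilotInteraction":
                events.append({
                    "time": record.get("CreationTime"),
                    "user": record.get("UserId"),
                    "workload": record.get("Workload"),
                })
    return events
```

Pulling the raw records is the easy part; correlating them back to the documents Copilot actually drew from is where the real investigative work begins.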

How Can Organisations Mitigate These Risks?

Purview isn't useless; in fact, it's essential. But it isn't enough on its own. Here's your survival checklist:

Before Deployment:

  • Use the SharePoint Active sites list, sorted by activity, to identify which SharePoint sites genuinely need to be universally accessible
  • Use Restricted SharePoint Search to temporarily limit organisation-wide search and Copilot experiences to selected SharePoint sites
  • Implement the principle of least privilege: no more "Everyone" permissions

During Deployment:

  • Use SharePoint Advanced Management to create an inactive site policy to automatically manage and reduce inactive sites
  • Set up DLP policies specifically for Copilot interactions using sensitivity labels that apply encryption
  • Train users to recognize prompt injection attempts

After Deployment:

  • Monitor Copilot audit logs obsessively
  • Continually monitor your data security risks using oversharing reports
  • Accept that prompt injection detection is still largely manual

The Bottom Line

According to Microsoft's 2024 Data Security Index, 40% of data security incidents in enterprises were linked to AI systems and tools. This figure marks a stark rise from 27% in 2023. Copilot will expose every security shortcut you've ever taken. Purview can help you see the problems and apply some band-aids, but the real work is fixing permissions, training users, and constantly monitoring for new attack vectors.

The uncomfortable truth? Most organisations aren't ready. They're going to deploy anyway. And security teams are going to spend the next year playing defense against threats that didn't exist several months ago.

Welcome to the AI security era. Hope you're ready for overtime.

For organizations serious about securing their AI deployments, Metomic provides comprehensive data security solutions that work alongside Purview to deliver real-time monitoring, automated sensitive data discovery, and proactive protection against AI-related data exposure risks. Metomic helps security teams gain the visibility and control needed to safely deploy AI tools without compromising data security or compliance.
