Blog
August 20, 2025

How Are AI Agents Exposing Your Organisation's Most Sensitive Data Through Inherited Permissions?

AI agents inherit employees' existing file permissions and automatically scan all accessible data to answer questions. This exposes sensitive information that employees didn't realise they could access, turning forgotten HR documents, executive communications, and confidential client data into active security vulnerabilities.


TL;DR: The AI Agent Data Exposure Crisis

AI agents are creating unprecedented data exposure risks by inheriting existing permissions and surfacing sensitive information employees didn't realise they could access. Recent research shows that breaches involving unauthorised AI tools cost organisations an average of $4.63 million, above the global average of $4.44 million (IBM, 2025). Meanwhile, 38% of employees share confidential data with AI platforms without approval (Cloud Security Alliance, 2025), and 97% of organisations that experienced AI-related breaches lacked proper AI access controls (IBM, 2025). These agents don't create new permissions; they simply expose what already exists, turning forgotten file shares and over-provisioned access into active security vulnerabilities.

What Makes AI Agent Data Exposure Different from Traditional Security Threats?

The Core Problem: Permission Inheritance

When an employee creates or uses an AI agent (like a SharePoint agent or Google Gemini), the AI doesn't get its own separate set of permissions. Instead, it automatically inherits the exact same access rights that the employee already has across all systems. Think of it like giving someone your house keys - they can now access every room you can access.

Traditional Human Behaviour vs. AI Behaviour:

Humans naturally self-limit: Even though an employee might technically have access to thousands of files, they typically only look at what's relevant to their current task. A marketing person with access to an HR folder usually won't browse through salary documents.

AI agents don't self-limit: When you ask an AI agent a question, it scans ALL accessible content to provide the best answer. It doesn't understand context boundaries or organisational hierarchy - it just sees "accessible data."
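To make the mechanics concrete, here is a minimal, self-contained sketch in Python. The documents, group names, and helper functions are hypothetical, not any vendor's API; the point is that the agent's effective scope is simply whatever the user can already read.

```python
# Illustrative sketch only: a toy retriever showing why inherited permissions matter.
# The document store, groups, and helper names are hypothetical, not a vendor API.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    acl: set[str]          # groups/users allowed to read this document
    sensitive: bool = False

DOCS = [
    Document("Q3 campaign plan", {"marketing"}),
    Document("Exit interview summary", {"hr", "project-phoenix"}, sensitive=True),
    Document("Salary benchmarks 2024", {"hr", "project-phoenix"}, sensitive=True),
]

def user_can_read(user_groups: set[str], doc: Document) -> bool:
    # The agent never gets its own permissions: it simply reuses the user's.
    return bool(user_groups & doc.acl)

def agent_retrieve(user_groups: set[str], query: str) -> list[Document]:
    # Unlike a human, the agent considers *everything* the user can read,
    # regardless of department boundaries or how the access was acquired.
    return [d for d in DOCS if user_can_read(user_groups, d)]

# Sarah still belongs to "project-phoenix" from a project that ended years ago,
# so her agent can ground answers in HR documents she forgot she could open.
sarah = {"marketing", "project-phoenix"}
print([d.title for d in agent_retrieve(sarah, "employee retention challenges")])
```

The example that follows plays out the same situation in narrative form.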

ā€Example:

The Setup:

  • Sarah from Marketing creates a SharePoint agent to help with campaign research
  • Two years ago, Sarah was on a cross-functional project that gave her access to an HR SharePoint site
  • IT never revoked that access when the project ended
  • Sarah forgot she even has this access

The Exposure:

  • A colleague asks Sarah's AI agent: "What are the main employee retention challenges?"
  • The AI agent scans ALL of Sarah's accessible content
  • It finds detailed HR reports, salary analysis, and exit interview summaries
  • The agent provides a comprehensive answer citing specific confidential data
  • Sarah and her colleague now have access to sensitive HR information they were never meant to see

Why This is Different from Traditional Data Breaches:

Traditional breaches: Hackers break into systems they shouldn't have access to.

AI agent exposure: No rules are broken - the AI uses legitimate permissions but exposes data in unintended ways.

Every employee has access to 11 million files on average, with 17% of all sensitive files accessible to all employees (Varonis, 2024). AI agents make this massive over-access visible and actionable for the first time.

Why Are Organisations Struggling to Secure AI Agent Data Access?

The rapid adoption of AI agents has outpaced security controls. From 2023 to 2024, the adoption of generative AI applications by enterprise employees grew from 74% to 96% (IBM, 2025), but governance hasn't kept pace.

Why Organisations Are Blindsided:

Forgotten Permissions: Employees accumulate access over years through:

  • Project collaborations
  • Role changes
  • Department transfers
  • System migrations
  • Temporary access that was never revoked

Permission Sprawl: Modern organisations use multiple systems (SharePoint, Google Drive, Teams, etc.) where permissions can be set at file, folder, site, and organisational levels.

Lack of Visibility: IT teams often don't have clear insight into who has access to what across all these systems.
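A short sketch shows why permission sprawl across file, folder, site, and organisational levels is hard to reason about. The tree structure and resolution rule below are illustrative assumptions, not SharePoint's or Google Drive's actual inheritance model.

```python
# Illustrative sketch: why permissions set at several levels are hard to reason about.
# Hypothetical structure, not SharePoint's or Drive's actual permission model.
TREE = {
    "/": {"parent": None, "grants": {"all-employees": "read"}},            # org-wide grant
    "/finance": {"parent": "/", "grants": {}},                             # inherits from "/"
    "/finance/board-packs": {"parent": "/finance", "grants": {"exec": "read"}},
}

def effective_grants(path: str) -> dict[str, str]:
    """Collect grants from the item up through every ancestor."""
    grants: dict[str, str] = {}
    while path is not None:
        node = TREE[path]
        for principal, level in node["grants"].items():
            grants.setdefault(principal, level)   # closest grant wins in this toy model
        path = node["parent"]
    return grants

# The board-pack folder looks locked down ("exec" only), but the org-wide grant
# at "/" still flows down unless inheritance is explicitly broken.
print(effective_grants("/finance/board-packs"))
# {'exec': 'read', 'all-employees': 'read'}
```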

Organisations run an average of 66 GenAI apps, with 10% classified as high risk (Palo Alto Networks, 2025). Many employees deploy AI agents without IT knowledge, creating blind spots in data governance. Additionally, 63% of breached organisations either lacked AI governance policies or were still developing them (IBM, 2025).

How Do Microsoft 365 AI Agents Create Unintended Data Exposure?

Microsoft's SharePoint agents and Copilot present unique risks because they operate within the existing Microsoft 365 permission framework. SharePoint agents access your organisation's data the same way Copilot in other Microsoft 365 apps does, responding to users based on their access permissions to the data (Microsoft Learn, 2025).

Scenario 1: The HR Document Leak

A sales manager creates a SharePoint agent to help with customer research. Unknown to them, they have lingering access to an HR site from a previous cross-functional project. When a colleague asks the agent about "employee retention strategies," it surfaces confidential HR analysis including salary benchmarks and exit interview summaries.

Scenario 2: The Executive Communication Exposure

An administrative assistant builds an agent for meeting prep. Due to calendar delegation permissions, they have access to executive emails. The agent inadvertently includes details from confidential merger discussions when asked about "upcoming strategic initiatives."

Scenario 3: The Financial Data Incident

A marketing coordinator creates a content agent scoped to their department's SharePoint site. However, a misconfigured folder share gives them read access to financial planning documents. When colleagues ask about "company growth projections," the agent provides specific revenue targets from confidential board presentations.

Microsoft's permission model means Microsoft 365 Copilot only surfaces organisational data to which individual users have at least view permissions (Microsoft Learn, 2025), but many organisations have never audited these permissions comprehensively.

What Are the Specific Risks with Google Workspace AI Agents?

Google's Gemini integration into Workspace creates similar but distinct exposure risks. Gemini only retrieves relevant content in Workspace that the user has access to in order to contextualise the prompt and ground responses (Google Support, 2025).

Scenario 1: The Drive Discovery Incident

An engineering manager uses Gemini to summarise project documentation. Due to inherited folder permissions from a previous role, they have access to legal documents. When discussing "project risks," Gemini references ongoing litigation details that were meant to be restricted to the legal team.

Scenario 2: The Client Information Breach

A junior consultant builds a research agent using Gemini for Google Workspace. Through shared drives from multiple projects, they unknowingly have access to confidential client data from other engagements. The agent surfaces competitive intelligence when asked about "industry trends," potentially violating client confidentiality agreements.

Scenario 3: The Executive Strategy Exposure

A communications specialist creates a Gemini agent to help with internal newsletters. Due to organisation-wide calendar permissions, they can access executive meeting notes stored in shared drives. The agent includes sensitive strategic decisions in what was intended to be a routine company update.

If your data isn't appropriately classified and you can't set the permissions properly, how will Google know which document shouldn't be shared? This question highlights the fundamental challenge: AI agents can only respect the permissions you've configured.

How Much Are AI Agent Data Exposures Costing Organisations?

The Business Impact:

This isn't just a theoretical risk. The financial impact extends beyond immediate breach costs:

Direct Costs: Breaches involving employees' unauthorised use of AI tools cost organisations an average of $4.63 million (IBM, 2025), with shadow AI adding $670,000 to breach costs (VentureBeat, 2025).

Regulatory Fines: GDPR fines totalled 2.1 billion euros in 2023 (Varonis, 2024). AI agents that inadvertently process EU personal data without proper consent can trigger significant penalties.

Reputation Damage: Lost business and reputational damage averaged $1.47 million per breach and accounted for the majority of the increase in the average cost of a breach in 2024 (IBM, 2025).

Competitive Damage: Trade secrets and strategic plans exposed to unauthorised employees.

Legal Liability: Confidential client data accessed by people outside the client team.

Compliance Violations: HIPAA or SOX violations when personal or regulated data is exposed to unauthorised personnel.

What Immediate Steps Should CISOs Take to Secure AI Agent Data Access?

1. Conduct Comprehensive Permission Audits

Before deploying any AI agents, audit existing permissions across your Microsoft 365 or Google Workspace environment. Research shows that 15% of companies found 1,000,000+ files open to every employee (Varonis, 2024), indicating widespread over-provisioning.
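One way to start such an audit is to walk an exported permissions inventory and flag items shared with broad principals. The sketch below assumes a CSV export with hypothetical column names; in practice the inventory would come from Microsoft Graph or the Google Drive/Admin SDK APIs.

```python
# A minimal audit sketch over an exported permissions inventory (CSV).
# The file name and column names ("item", "principal", "sensitivity") are assumptions.
import csv
from collections import Counter

BROAD_PRINCIPALS = {"Everyone", "All Employees", "anyoneWithLink"}

def audit(path: str = "permissions_export.csv") -> None:
    findings = Counter()
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            item, principal, label = row["item"], row["principal"], row.get("sensitivity", "")
            if principal in BROAD_PRINCIPALS:
                findings["shared_with_everyone"] += 1
                if label.lower() in {"confidential", "restricted"}:
                    findings["sensitive_and_broadly_shared"] += 1
                    print(f"REVIEW: {item} ({label}) is readable by {principal}")
    print(dict(findings))

if __name__ == "__main__":
    audit()
```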

2. Implement Data Classification and Labelling

Use sensitivity labels and data loss prevention (DLP) policies to restrict AI agent access. You can prevent selected files from being used by agents by using sensitivity labels along with Microsoft Purview Data Loss Prevention (DLP) (Microsoft Learn, 2025).
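The effect of label-gated access can be sketched as a simple filter applied before content reaches an agent's grounding set. This is a conceptual illustration only; it is not how Microsoft Purview enforces labels internally.

```python
# Conceptual sketch of label-gated grounding: drop labelled content before the
# agent ever sees it. Mimics the *effect* of sensitivity labels + DLP, not the
# actual Purview enforcement mechanism.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def filter_for_agent(documents: list[dict]) -> list[dict]:
    allowed = []
    for doc in documents:
        if doc.get("sensitivity_label") in BLOCKED_LABELS:
            continue  # excluded from the agent's grounding set entirely
        allowed.append(doc)
    return allowed

docs = [
    {"title": "Campaign brief", "sensitivity_label": "General"},
    {"title": "Exit interviews 2024", "sensitivity_label": "Highly Confidential"},
]
print([d["title"] for d in filter_for_agent(docs)])   # only "Campaign brief"
```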

3. Deploy AI-Specific Monitoring

Traditional security tools miss AI-specific risks. GenAI-related DLP incidents increased more than 2.5X, now comprising 14% of all DLP incidents (Palo Alto Networks, 2025). Implement monitoring that specifically tracks AI interactions with sensitive data.
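A minimal monitoring sketch might scan agent audit events and flag responses that surfaced sensitive items to users outside the owning team. The log format and field names below are assumptions; real events would come from your platform's audit logs.

```python
# Monitoring sketch: flag agent responses that surfaced sensitive items to users
# outside the owning team. Event shape and field names are illustrative assumptions.
import json

SENSITIVE_LABELS = {"Confidential", "Highly Confidential"}

def review_agent_events(log_lines: list[str]) -> list[dict]:
    alerts = []
    for line in log_lines:
        event = json.loads(line)
        for item in event.get("grounding_items", []):
            if item["label"] in SENSITIVE_LABELS and event["user_team"] != item["owning_team"]:
                alerts.append({"user": event["user"], "item": item["title"]})
    return alerts

sample = ['{"user": "colleague@corp.example", "user_team": "marketing",'
          ' "grounding_items": [{"title": "Salary benchmarks", "label": "Confidential",'
          ' "owning_team": "hr"}]}']
print(review_agent_events(sample))
```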

4. Establish AI Governance Policies

Create an AI Acceptable Use Policy and classify AI tools into categories: Approved, Limited-Use, and Prohibited (Cloud Security Alliance, 2025). Specify exactly what types of data can and cannot be fed into AI tools.
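Such a policy can also be encoded as data so that it is checkable rather than purely aspirational. The tool names and data categories in this sketch are illustrative assumptions.

```python
# Sketch of encoding an AI acceptable-use policy as data so it can be checked
# automatically. Tool names and data classifications are illustrative only.
AI_TOOL_POLICY = {
    "copilot-m365":       {"category": "Approved",    "allowed_data": {"public", "internal"}},
    "gemini-workspace":   {"category": "Approved",    "allowed_data": {"public", "internal"}},
    "external-chatbot":   {"category": "Limited-Use", "allowed_data": {"public"}},
    "unvetted-genai-app": {"category": "Prohibited",  "allowed_data": set()},
}

def is_use_permitted(tool: str, data_classification: str) -> bool:
    policy = AI_TOOL_POLICY.get(tool, {"category": "Prohibited", "allowed_data": set()})
    return policy["category"] != "Prohibited" and data_classification in policy["allowed_data"]

print(is_use_permitted("copilot-m365", "internal"))             # True
print(is_use_permitted("external-chatbot", "client-confidential"))  # False
print(is_use_permitted("unvetted-genai-app", "public"))         # False
```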

5. Use Restricted Access Controls

For Microsoft environments, a restricted access control policy limits access to a site to only the group of users specified in the policy (Microsoft Learn, 2025). For Google, implement similar controls through administrative policies (Google Workspace Blog, 2025).

How Can Organisations Balance AI Innovation with Data Security?

The goal isn't to eliminate AI agents but to deploy them securely. Organisations successfully managing this balance focus on:

Principle of Least Privilege: Regularly review and reduce excessive permissions before AI deployment.

Sandbox Environments: Create AI sandboxes where employees can test AI tools in a controlled environment (Cloud Security Alliance, 2025).

Continuous Monitoring: Deploy scoped API keys, least-privilege enforcement, and identity-bound permissions across agent-tool interactions.

Employee Education: Teach employees how AI works, including risks, responsible use and best practices (Cloud Security Alliance, 2025).
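The scoped, identity-bound credentials mentioned under Continuous Monitoring can be illustrated with a stdlib-only sketch that binds an agent, an acting user, a narrow scope, and a short lifetime into a single signed token. In production this would be an OAuth token exchange or your platform's native mechanism, not this toy code.

```python
# Sketch of issuing a scoped, identity-bound, short-lived credential for a single
# agent-tool call. Stdlib-only illustration, not a real security mechanism.
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"replace-with-a-managed-secret"

def mint_agent_token(agent_id: str, acting_user: str, scope: str, ttl_seconds: int = 300) -> str:
    claims = {
        "agent": agent_id,       # which agent is calling
        "sub": acting_user,      # the human identity the call is bound to
        "scope": scope,          # e.g. "files:read:/sites/marketing" - as narrow as possible
        "exp": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

print(mint_agent_token("campaign-research-agent", "sarah@corp.example",
                       "files:read:/sites/marketing"))
```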

What's the Long-term Outlook for AI Agent Security?

The threat landscape continues evolving. While recent research on LLM Applications shows Prompt Injections, Sensitive Information Disclosure, and LLM Supply Chain Vulnerabilities as top concerns, Agentic AI faces distinct challenges including Memory Poisoning, Tool Misuse and Privilege Compromise (Lasso Security, 2025).

AI agents will likely play an increasing role in both attack and defence scenarios as the technology continues to mature and become more widely adopted across organisations.

Organisations that proactively address AI agent data governance now will be better positioned to leverage AI's benefits while maintaining security. Those that wait risk becoming the next cautionary tale about inherited permissions and uncontrolled data access.

The Solution Approach:

The fix requires treating AI deployment as a data governance project, not just a technology rollout:

  1. Permission auditing: Discover who actually has access to what
  2. Access cleanup: Remove unnecessary permissions accumulated over time
  3. Principle of least privilege: Ensure people only have access to what they need for their current role
  4. Ongoing monitoring: Track how AI agents are accessing and surfacing data

The fundamental challenge remains: AI agents don't create new security problems; they make existing permission problems visible and exploitable at scale. Success requires treating AI deployment as a data governance initiative, not just a technology rollout.
