Blog
December 24, 2025

2025 The Calm Before the AI Storm: Why CISOs Must Prepare for an Unprecedented Spike in Data Risk

Here's why AI automation will reshape data security in 2026, and how to shift to data-first defense.

Download
Download
Blog
December 24, 2025

2025 The Calm Before the AI Storm: Why CISOs Must Prepare for an Unprecedented Spike in Data Risk

Here's why AI automation will reshape data security in 2026, and how to shift to data-first defense.

Download
Download

In a recent conversation with Boaz Valkin from Falkin, our CTO Ben Enckevort made a prediction: "We are going to look back at 2025 as the quiet period."

Not because 2025 was free from cyber threats, but because what's coming in 2026 will make this year's challenges seem manageable by comparison. Beneath the surface of day-to-day security operations, adversaries are upgrading their playbooks in a fundamental way. AI-driven automation is lowering the cost of attacks to near zero, expanding the reach of social engineering exponentially, and turning every unprotected SaaS dataset into a future liability.

According to Ben, 2026 will demand a decisive shift from reactive security to data-first, AI-ready governance. Here's why.

You can listen to the full conversation here.

2025 Seemed Stable but it Wasn’t

Many security teams have maintained steady operations throughout 2025. Our own research showed 90% of CISOs believed they were meeting key security objectives.

But as Ben explained to Falkin, this sense of control is deceptive. While traditional security metrics may look acceptable, a fundamental shift is already underway.

"The ability to automate things, anything, is increasing drastically. The cost of automating things is dropping to almost zero," Ben noted. "And whenever you take anything that had a cost and you reduce that cost to zero or near zero, you produce emergent new behaviours that nobody would have bothered doing before."

Attackers are improving their techniques while rewriting the economics of cybercrime itself. What wasn't worth an attacker's time in 2022 can now be automated and executed at scale, thousands of times per hour.

Falkin's work on behavioural fraud detection reveals how this plays out: the threat is shifting earlier in the attack chain, before transactions, before authentication failures — into the grey space where data, AI agents, and user behaviour intersect.

AI Has Quietly Changed the Cost of an Attack

Historically, launching a sophisticated attack required human research, human iteration, and was limited by human error and human time.

Now, AI can execute each of those steps thousands of times an hour, at zero marginal cost.

As Ben put it: "To write a virus, you used to have a programmer and they had to do research and understand the attack vectors and then write that thing. And right now, you can have a thousand AIs try a million things in an hour."

This shift has three critical implications:

1. “Low-value” attacks become high-volume threats.

What wasn't economically viable for attackers in previous years is now fully automated. Every misconfigured folder, every over-shared document, every forgotten access permission becomes a viable target.

2. SaaS tools (not networks) are the new target.

If an AI agent can read Slack threads, Google Drive files, or Jira tickets, then any sensitive data inside those tools becomes an attack vector. Traditional perimeter security doesn't address this risk.

3. The real attack surface is your data access graph.

It’s no longer the systems that are the risk surfaces. It is our data. And data stored in collaborative SaaS tools is the easiest to access, easiest to repurpose, easiest to manipulate.

This aligns directly with Metomic’s 2024 CISO survey, which found that 15–40% of documents contain sensitive information and in some cases up to 95% are misconfigured.

2025 Exposed a New Problem: AI Doesn’t Know What’s Data and What’s Code

One of the most important insights from the conversation is the structural flaw in today’s AI systems:

LLMs can’t reliably distinguish between information and instructions.

Ben drew a parallel to early web security: "Back in the kind of early 2000s, late 2000s, many companies in the industry were struggling with code injection attacks, cross-site scripting. We're seeing exactly the same thing. There is no differentiation between the code and the text."

In practical terms, this means:

  • A single Google Doc with embedded instructions or manipulative phrasing could influence an AI agent reading it
  • Slack messages become potential trigger instructions
  • Files with sensitive context can mislead agents executing routine tasks
  • Personal AI agents reading WhatsApp messages create new social engineering vectors

Until AI architectures evolve to solve this problem, data governance becomes the critical defence layer, not just model governance or traditional security controls.

Why CISOs Need a Data-First Security Strategy in 2026

The cybersecurity industry has spent the last decade refining device, endpoint, and network controls. But the next wave of AI-driven attacks won't start there.

They will start with:

  • exposed tokens in Slack,
  • over-shared folders in Drive,
  • mislabelled financial models,
  • Jira tickets containing sensitive customer data,
  • and historic content that employees didn’t even realise they had access to.

AI agents surface, and act on, all of it.

For CISOs, 2026 will require a pivot from traditional posture management to AI-ready SaaS data governance:

1. Full visibility across Slack, Drive, Jira, Confluence

If you can’t see the data, you can’t protect it.

2. Automatic remediation before AI agents connect to your environment

Human behaviour alone cannot scale.

3. Google Drive labelling and data classification that is enforced, not optional

Metomic’s AI-powered classification fills the gap between policy and reality.

4. User-involved security that doesn’t rely on perfect memory

Slack prompts, quick-fix buttons, automated workflows — they become essential.

5. Continuous monitoring of how data moves through the SaaS ecosystem

Especially as more teams deploy Gemini, Copilot, and internal agent frameworks.

6 Questions Every CISO Should Ask Before January 2026

1. Which SaaS systems can our AI agents access today — directly or indirectly?

Visibility must come first. Map out every connection point where an AI agent could read or act on corporate data.

2. How much sensitive data sits in collaborative tools with misconfigured access?

For most organisations, the honest answer is: "More than we think." Run an audit now, before AI agents make this a critical vulnerability.

3. Which files contain AI-triggerable instructions alongside data?

This is the new category of "shadow risk" — documents that could manipulate AI behaviour through embedded prompts or specific phrasing.

4. Can we control how data is surfaced to AI systems?

Classification and labelling become core controls, not nice-to-haves. You need the ability to tag data that should never be exposed to AI agents.

5. If an employee leaves tomorrow, what data does their workspace still expose?

Or worse — what does their AI agent still have cached? Access reviews take on new urgency when agents have persistent memory.

6. If a breach occurs, could we demonstrate governance, not just detection?

Regulators and stakeholders will want to see proactive data controls, not just incident response logs.

Why 2026 Will Reward the Safest, Not the Fastest

The threat landscape is evolving at an unprecedented pace. We're heading toward automated attacks on all fronts - targeting businesses and personal data alike, at a scale that's never been possible before.

But there's a flip side to this AI-powered transformation. The same technology driving sophisticated attacks also unlocks powerful defensive capabilities. AI can stitch together data that tells a complete picture in seconds, finding patterns and threats that would take human analysts hours or days to uncover. Instead of manually searching for the needle in the haystack, AI runs a magnet over it and pulls out every needle at once.

The critical insight is this: organisations cannot treat AI adoption as just a technical rollout.

It's a data governance transformation.

2026 won't reward the fastest AI adopters. It will reward the safest. Those who understand that before connecting AI agents to their data, they need complete visibility, proper classification, and robust controls over what information gets exposed.

How Metomic Helps CISOs Prepare for the Next Wave

Metomic gives security teams:

  • Real-time visibility into sensitive data across Slack, Drive, Jira, Confluence.
  • Automatic classification and Drive labelling, aligned to business context.
  • Workflow-driven remediation that reduces manual triage.
  • User-involved fixes that resolve issues at the source.
  • AI-readiness for Gemini, Copilot, and agent ecosystems, by preventing oversharing at scale.

This is what modern SaaS DLP and AI governance need to look like: simple, human, precise, and built around the reality of how people work.

Closing Thoughts

As you plan for 2026, the fundamental shift in cybersecurity is clear: systems are no longer the primary risk surface. Your data is.

The attack vectors that kept CISOs awake at night for the past decade (compromised endpoints, network breaches, vulnerable perimeters) are being overshadowed by a simpler reality. Every document in Google Drive, every message in Slack, every ticket in Jira represents potential exposure when AI agents have access to read, interpret, and act on that information.

Your strongest defence isn't faster threat detection or more sophisticated endpoint protection, but controlling your data before AI gets to it — whether that AI is your own productivity tool, a partner's integration, or an adversary's automated attack system.

The questions to ask are: Will your data governance will scale up first? Will you have visibility into your sensitive data? Will you have classification systems in place? Will you have automated controls that work at machine speed?

The organisations that answer yes to these questions will navigate 2026 with confidence. Those that don't will be fighting yesterday's battles with tomorrow's threats.

If you'd like guidance on building an AI-ready data governance strategy, our team is here to help.

In a recent conversation with Boaz Valkin from Falkin, our CTO Ben Enckevort made a prediction: "We are going to look back at 2025 as the quiet period."

Not because 2025 was free from cyber threats, but because what's coming in 2026 will make this year's challenges seem manageable by comparison. Beneath the surface of day-to-day security operations, adversaries are upgrading their playbooks in a fundamental way. AI-driven automation is lowering the cost of attacks to near zero, expanding the reach of social engineering exponentially, and turning every unprotected SaaS dataset into a future liability.

According to Ben, 2026 will demand a decisive shift from reactive security to data-first, AI-ready governance. Here's why.

You can listen to the full conversation here.

2025 Seemed Stable but it Wasn’t

Many security teams have maintained steady operations throughout 2025. Our own research showed 90% of CISOs believed they were meeting key security objectives.

But as Ben explained to Falkin, this sense of control is deceptive. While traditional security metrics may look acceptable, a fundamental shift is already underway.

"The ability to automate things, anything, is increasing drastically. The cost of automating things is dropping to almost zero," Ben noted. "And whenever you take anything that had a cost and you reduce that cost to zero or near zero, you produce emergent new behaviours that nobody would have bothered doing before."

Attackers are improving their techniques while rewriting the economics of cybercrime itself. What wasn't worth an attacker's time in 2022 can now be automated and executed at scale, thousands of times per hour.

Falkin's work on behavioural fraud detection reveals how this plays out: the threat is shifting earlier in the attack chain, before transactions, before authentication failures — into the grey space where data, AI agents, and user behaviour intersect.

AI Has Quietly Changed the Cost of an Attack

Historically, launching a sophisticated attack required human research, human iteration, and was limited by human error and human time.

Now, AI can execute each of those steps thousands of times an hour, at zero marginal cost.

As Ben put it: "To write a virus, you used to have a programmer and they had to do research and understand the attack vectors and then write that thing. And right now, you can have a thousand AIs try a million things in an hour."

This shift has three critical implications:

1. “Low-value” attacks become high-volume threats.

What wasn't economically viable for attackers in previous years is now fully automated. Every misconfigured folder, every over-shared document, every forgotten access permission becomes a viable target.

2. SaaS tools (not networks) are the new target.

If an AI agent can read Slack threads, Google Drive files, or Jira tickets, then any sensitive data inside those tools becomes an attack vector. Traditional perimeter security doesn't address this risk.

3. The real attack surface is your data access graph.

It’s no longer the systems that are the risk surfaces. It is our data. And data stored in collaborative SaaS tools is the easiest to access, easiest to repurpose, easiest to manipulate.

This aligns directly with Metomic’s 2024 CISO survey, which found that 15–40% of documents contain sensitive information and in some cases up to 95% are misconfigured.

2025 Exposed a New Problem: AI Doesn’t Know What’s Data and What’s Code

One of the most important insights from the conversation is the structural flaw in today’s AI systems:

LLMs can’t reliably distinguish between information and instructions.

Ben drew a parallel to early web security: "Back in the kind of early 2000s, late 2000s, many companies in the industry were struggling with code injection attacks, cross-site scripting. We're seeing exactly the same thing. There is no differentiation between the code and the text."

In practical terms, this means:

  • A single Google Doc with embedded instructions or manipulative phrasing could influence an AI agent reading it
  • Slack messages become potential trigger instructions
  • Files with sensitive context can mislead agents executing routine tasks
  • Personal AI agents reading WhatsApp messages create new social engineering vectors

Until AI architectures evolve to solve this problem, data governance becomes the critical defence layer, not just model governance or traditional security controls.

Why CISOs Need a Data-First Security Strategy in 2026

The cybersecurity industry has spent the last decade refining device, endpoint, and network controls. But the next wave of AI-driven attacks won't start there.

They will start with:

  • exposed tokens in Slack,
  • over-shared folders in Drive,
  • mislabelled financial models,
  • Jira tickets containing sensitive customer data,
  • and historic content that employees didn’t even realise they had access to.

AI agents surface, and act on, all of it.

For CISOs, 2026 will require a pivot from traditional posture management to AI-ready SaaS data governance:

1. Full visibility across Slack, Drive, Jira, Confluence

If you can’t see the data, you can’t protect it.

2. Automatic remediation before AI agents connect to your environment

Human behaviour alone cannot scale.

3. Google Drive labelling and data classification that is enforced, not optional

Metomic’s AI-powered classification fills the gap between policy and reality.

4. User-involved security that doesn’t rely on perfect memory

Slack prompts, quick-fix buttons, automated workflows — they become essential.

5. Continuous monitoring of how data moves through the SaaS ecosystem

Especially as more teams deploy Gemini, Copilot, and internal agent frameworks.

6 Questions Every CISO Should Ask Before January 2026

1. Which SaaS systems can our AI agents access today — directly or indirectly?

Visibility must come first. Map out every connection point where an AI agent could read or act on corporate data.

2. How much sensitive data sits in collaborative tools with misconfigured access?

For most organisations, the honest answer is: "More than we think." Run an audit now, before AI agents make this a critical vulnerability.

3. Which files contain AI-triggerable instructions alongside data?

This is the new category of "shadow risk" — documents that could manipulate AI behaviour through embedded prompts or specific phrasing.

4. Can we control how data is surfaced to AI systems?

Classification and labelling become core controls, not nice-to-haves. You need the ability to tag data that should never be exposed to AI agents.

5. If an employee leaves tomorrow, what data does their workspace still expose?

Or worse — what does their AI agent still have cached? Access reviews take on new urgency when agents have persistent memory.

6. If a breach occurs, could we demonstrate governance, not just detection?

Regulators and stakeholders will want to see proactive data controls, not just incident response logs.

Why 2026 Will Reward the Safest, Not the Fastest

The threat landscape is evolving at an unprecedented pace. We're heading toward automated attacks on all fronts - targeting businesses and personal data alike, at a scale that's never been possible before.

But there's a flip side to this AI-powered transformation. The same technology driving sophisticated attacks also unlocks powerful defensive capabilities. AI can stitch together data that tells a complete picture in seconds, finding patterns and threats that would take human analysts hours or days to uncover. Instead of manually searching for the needle in the haystack, AI runs a magnet over it and pulls out every needle at once.

The critical insight is this: organisations cannot treat AI adoption as just a technical rollout.

It's a data governance transformation.

2026 won't reward the fastest AI adopters. It will reward the safest. Those who understand that before connecting AI agents to their data, they need complete visibility, proper classification, and robust controls over what information gets exposed.

How Metomic Helps CISOs Prepare for the Next Wave

Metomic gives security teams:

  • Real-time visibility into sensitive data across Slack, Drive, Jira, Confluence.
  • Automatic classification and Drive labelling, aligned to business context.
  • Workflow-driven remediation that reduces manual triage.
  • User-involved fixes that resolve issues at the source.
  • AI-readiness for Gemini, Copilot, and agent ecosystems, by preventing oversharing at scale.

This is what modern SaaS DLP and AI governance need to look like: simple, human, precise, and built around the reality of how people work.

Closing Thoughts

As you plan for 2026, the fundamental shift in cybersecurity is clear: systems are no longer the primary risk surface. Your data is.

The attack vectors that kept CISOs awake at night for the past decade (compromised endpoints, network breaches, vulnerable perimeters) are being overshadowed by a simpler reality. Every document in Google Drive, every message in Slack, every ticket in Jira represents potential exposure when AI agents have access to read, interpret, and act on that information.

Your strongest defence isn't faster threat detection or more sophisticated endpoint protection, but controlling your data before AI gets to it — whether that AI is your own productivity tool, a partner's integration, or an adversary's automated attack system.

The questions to ask are: Will your data governance will scale up first? Will you have visibility into your sensitive data? Will you have classification systems in place? Will you have automated controls that work at machine speed?

The organisations that answer yes to these questions will navigate 2026 with confidence. Those that don't will be fighting yesterday's battles with tomorrow's threats.

If you'd like guidance on building an AI-ready data governance strategy, our team is here to help.

In a recent conversation with Boaz Valkin from Falkin, our CTO Ben Enckevort made a prediction: "We are going to look back at 2025 as the quiet period."

Not because 2025 was free from cyber threats, but because what's coming in 2026 will make this year's challenges seem manageable by comparison. Beneath the surface of day-to-day security operations, adversaries are upgrading their playbooks in a fundamental way. AI-driven automation is lowering the cost of attacks to near zero, expanding the reach of social engineering exponentially, and turning every unprotected SaaS dataset into a future liability.

According to Ben, 2026 will demand a decisive shift from reactive security to data-first, AI-ready governance. Here's why.

You can listen to the full conversation here.

2025 Seemed Stable but it Wasn’t

Many security teams have maintained steady operations throughout 2025. Our own research showed 90% of CISOs believed they were meeting key security objectives.

But as Ben explained to Falkin, this sense of control is deceptive. While traditional security metrics may look acceptable, a fundamental shift is already underway.

"The ability to automate things, anything, is increasing drastically. The cost of automating things is dropping to almost zero," Ben noted. "And whenever you take anything that had a cost and you reduce that cost to zero or near zero, you produce emergent new behaviours that nobody would have bothered doing before."

Attackers are improving their techniques while rewriting the economics of cybercrime itself. What wasn't worth an attacker's time in 2022 can now be automated and executed at scale, thousands of times per hour.

Falkin's work on behavioural fraud detection reveals how this plays out: the threat is shifting earlier in the attack chain, before transactions, before authentication failures — into the grey space where data, AI agents, and user behaviour intersect.

AI Has Quietly Changed the Cost of an Attack

Historically, launching a sophisticated attack required human research, human iteration, and was limited by human error and human time.

Now, AI can execute each of those steps thousands of times an hour, at zero marginal cost.

As Ben put it: "To write a virus, you used to have a programmer and they had to do research and understand the attack vectors and then write that thing. And right now, you can have a thousand AIs try a million things in an hour."

This shift has three critical implications:

1. “Low-value” attacks become high-volume threats.

What wasn't economically viable for attackers in previous years is now fully automated. Every misconfigured folder, every over-shared document, every forgotten access permission becomes a viable target.

2. SaaS tools (not networks) are the new target.

If an AI agent can read Slack threads, Google Drive files, or Jira tickets, then any sensitive data inside those tools becomes an attack vector. Traditional perimeter security doesn't address this risk.

3. The real attack surface is your data access graph.

It’s no longer the systems that are the risk surfaces. It is our data. And data stored in collaborative SaaS tools is the easiest to access, easiest to repurpose, easiest to manipulate.

This aligns directly with Metomic’s 2024 CISO survey, which found that 15–40% of documents contain sensitive information and in some cases up to 95% are misconfigured.

2025 Exposed a New Problem: AI Doesn’t Know What’s Data and What’s Code

One of the most important insights from the conversation is the structural flaw in today’s AI systems:

LLMs can’t reliably distinguish between information and instructions.

Ben drew a parallel to early web security: "Back in the kind of early 2000s, late 2000s, many companies in the industry were struggling with code injection attacks, cross-site scripting. We're seeing exactly the same thing. There is no differentiation between the code and the text."

In practical terms, this means:

  • A single Google Doc with embedded instructions or manipulative phrasing could influence an AI agent reading it
  • Slack messages become potential trigger instructions
  • Files with sensitive context can mislead agents executing routine tasks
  • Personal AI agents reading WhatsApp messages create new social engineering vectors

Until AI architectures evolve to solve this problem, data governance becomes the critical defence layer, not just model governance or traditional security controls.

Why CISOs Need a Data-First Security Strategy in 2026

The cybersecurity industry has spent the last decade refining device, endpoint, and network controls. But the next wave of AI-driven attacks won't start there.

They will start with:

  • exposed tokens in Slack,
  • over-shared folders in Drive,
  • mislabelled financial models,
  • Jira tickets containing sensitive customer data,
  • and historic content that employees didn’t even realise they had access to.

AI agents surface, and act on, all of it.

For CISOs, 2026 will require a pivot from traditional posture management to AI-ready SaaS data governance:

1. Full visibility across Slack, Drive, Jira, Confluence

If you can’t see the data, you can’t protect it.

2. Automatic remediation before AI agents connect to your environment

Human behaviour alone cannot scale.

3. Google Drive labelling and data classification that is enforced, not optional

Metomic’s AI-powered classification fills the gap between policy and reality.

4. User-involved security that doesn’t rely on perfect memory

Slack prompts, quick-fix buttons, automated workflows — they become essential.

5. Continuous monitoring of how data moves through the SaaS ecosystem

Especially as more teams deploy Gemini, Copilot, and internal agent frameworks.

6 Questions Every CISO Should Ask Before January 2026

1. Which SaaS systems can our AI agents access today — directly or indirectly?

Visibility must come first. Map out every connection point where an AI agent could read or act on corporate data.

2. How much sensitive data sits in collaborative tools with misconfigured access?

For most organisations, the honest answer is: "More than we think." Run an audit now, before AI agents make this a critical vulnerability.

3. Which files contain AI-triggerable instructions alongside data?

This is the new category of "shadow risk" — documents that could manipulate AI behaviour through embedded prompts or specific phrasing.

4. Can we control how data is surfaced to AI systems?

Classification and labelling become core controls, not nice-to-haves. You need the ability to tag data that should never be exposed to AI agents.

5. If an employee leaves tomorrow, what data does their workspace still expose?

Or worse — what does their AI agent still have cached? Access reviews take on new urgency when agents have persistent memory.

6. If a breach occurs, could we demonstrate governance, not just detection?

Regulators and stakeholders will want to see proactive data controls, not just incident response logs.

Why 2026 Will Reward the Safest, Not the Fastest

The threat landscape is evolving at an unprecedented pace. We're heading toward automated attacks on all fronts - targeting businesses and personal data alike, at a scale that's never been possible before.

But there's a flip side to this AI-powered transformation. The same technology driving sophisticated attacks also unlocks powerful defensive capabilities. AI can stitch together data that tells a complete picture in seconds, finding patterns and threats that would take human analysts hours or days to uncover. Instead of manually searching for the needle in the haystack, AI runs a magnet over it and pulls out every needle at once.

The critical insight is this: organisations cannot treat AI adoption as just a technical rollout.

It's a data governance transformation.

2026 won't reward the fastest AI adopters. It will reward the safest. Those who understand that before connecting AI agents to their data, they need complete visibility, proper classification, and robust controls over what information gets exposed.

How Metomic Helps CISOs Prepare for the Next Wave

Metomic gives security teams:

  • Real-time visibility into sensitive data across Slack, Drive, Jira, Confluence.
  • Automatic classification and Drive labelling, aligned to business context.
  • Workflow-driven remediation that reduces manual triage.
  • User-involved fixes that resolve issues at the source.
  • AI-readiness for Gemini, Copilot, and agent ecosystems, by preventing oversharing at scale.

This is what modern SaaS DLP and AI governance need to look like: simple, human, precise, and built around the reality of how people work.

Closing Thoughts

As you plan for 2026, the fundamental shift in cybersecurity is clear: systems are no longer the primary risk surface. Your data is.

The attack vectors that kept CISOs awake at night for the past decade (compromised endpoints, network breaches, vulnerable perimeters) are being overshadowed by a simpler reality. Every document in Google Drive, every message in Slack, every ticket in Jira represents potential exposure when AI agents have access to read, interpret, and act on that information.

Your strongest defence isn't faster threat detection or more sophisticated endpoint protection, but controlling your data before AI gets to it — whether that AI is your own productivity tool, a partner's integration, or an adversary's automated attack system.

The questions to ask are: Will your data governance will scale up first? Will you have visibility into your sensitive data? Will you have classification systems in place? Will you have automated controls that work at machine speed?

The organisations that answer yes to these questions will navigate 2026 with confidence. Those that don't will be fighting yesterday's battles with tomorrow's threats.

If you'd like guidance on building an AI-ready data governance strategy, our team is here to help.