Blog
February 4, 2026

The Obedient Monkey: A framework for AI agent risk your board will remember

How to explain OpenClaw, prompt injection, and agentic AI risk without losing the room

Download
Download
Blog
February 4, 2026

The Obedient Monkey: A framework for AI agent risk your board will remember

How to explain OpenClaw, prompt injection, and agentic AI risk without losing the room

Download
Download

Your team is talking about AI agents. Maybe they're already experimenting with OpenClaw, the open-source assistant that's taken the tech world by storm this past week. Maybe they're asking for budget to explore "agentic AI." Maybe someone just sent you a breathless article about the future of autonomous assistants.

You need a mental model that demystifies AI, something you can use in your next board meeting and which makes the risks intuitive without requiring a computer science degree.

Here it is.

The Obedient Monkey

AI agents like OpenClaw (previously ClawBot) are best understood as obedient monkeys.

A monkey is very capable since it can use tools and pull levers. It can really do things in the real world. People are using OpenClaw to negotiate car prices with dealerships, make restaurant reservations by phone, book flights, clear inboxes, and manage calendars - all autonomously and without human input.

But here's the thing about your monkey: it's not very discerning.

It's trying to listen to your instructions and do its best to get them done. But it's also willing to listen to anybody else's instructions. It doesn't have a strong sense of who should be giving it orders. If someone else tells it to do something, there's a good chance it will just go and do it.

This is the core risk with AI agents. The monkey is powerful, helpful, and eager to please - but it can't always tell the difference between you and someone pretending to be you.

What Is Prompt Injection?

When security researchers talk about "prompt injection," this is what they mean in plain terms:

Somebody else has put an instruction where they're hoping your monkey will see it. And when it sees it, there's a good chance it will just go and do whatever that instruction says.

That instruction might be hidden in an email. It might be buried in a document. It might be embedded in a webpage the agent visits on your behalf. Your monkey sees it, thinks it's a legitimate task, and acts.

The consequences depend on what your monkey has access to. If it can read your email, it can forward sensitive messages. If it can manage your calendar, it can accept meeting invites. If it has access to your bank account (and some people are giving agents exactly that kind of access) it can transfer money.

This isn't theoretical.

Security researchers have demonstrated extracting cryptocurrency private keys from compromised OpenClaw instances in under five minutes using prompt injection attacks.

The Levers Your Monkey Can Pull

To understand the risk, you need to understand what the monkey has been given access to.

OpenClaw and similar AI agents work by connecting to the tools you already use: email, calendar, Slack, WhatsApp, file systems, even your terminal. The more connections you give it, the more capable it becomes, but also the more damage it can do if compromised.

Think of each integration as a lever your monkey can pull:

  • Email access → It can read, send, and delete messages as you
  • Calendar access → It can see your schedule, accept invites, book meetings
  • File system access → It can read, write, and delete documents
  • Browser access → It can fill out forms, click buttons, navigate websites
  • Terminal access → It can execute code and system commands
  • Messaging apps → It can send messages as you to your contacts

The monkey doesn't distinguish between "pull this lever because my owner asked" and "pull this lever because someone in an email told me to." It just pulls levers.

Why This Moment Is Different

AI assistants aren't new. Siri, Alexa, and Google Assistant have been around for years. But those assistants are limited by design: they can set timers and play music, but they can't take meaningful action in your digital life.

OpenClaw is different.

It's the first widely adopted AI agent that can actually do things across your most sensitive systems. It can book flights, negotiate purchases, manage your inbox, and execute code, all without you being present.

This is genuinely useful. It's also genuinely risky.

And right now, in early February 2026, we're watching what happens when a powerful new capability goes viral before the security practices catch up. In the past week alone:

  • Security researchers found over 42,000 OpenClaw instances exposed to the public internet
  • Malware families are already targeting OpenClaw configuration files to steal credentials
  • Over 230 malicious plugins were discovered in the OpenClaw skills marketplace
  • A fake VS Code extension impersonating OpenClaw was caught installing remote access malware

The monkey is out of the cage. The question is whether your organisation is ready.

Three Questions for Your Next Leadership Meeting

When AI agents come up in conversation, here are the questions that matter:

1. What levers have we given the monkey?

Map every integration. If someone in your organisation is running an AI agent, what systems does it have access to? Email? Calendar? File storage? Financial systems? The blast radius of a compromised agent equals every tool it can touch.

2. Who else can talk to our monkey?

AI agents that connect to email, messaging apps, or the web are exposed to untrusted input. Anyone who can send your agent a message or embed instructions in a document it might read, can potentially influence its behaviour. That's a fundamentally different threat model than traditional software.

3. What's our policy on experimentation?

Your people are curious.

They're going to experiment whether you've figured out the guardrails or not. The question is whether they're doing it on work devices connected to corporate systems, or on isolated personal machines with no sensitive data. One of those is manageable. The other is a breach waiting to happen.

The Bottom Line

AI agents are coming. They represent a genuine leap in what's possible: autonomous assistants that can take action on your behalf, around the clock, across every system you use.

But the same capabilities that make them powerful make them dangerous when misconfigured or compromised. And right now, the technology is moving faster than the security practices.

The obedient monkey metaphor captures the essential dynamic: you have a capable, eager assistant that will do what it's told, by you or by anyone else who figures out how to talk to it.

To enable experimentation safely, we all have the responsibility to ensure the monkey is on a very short leash until we figure out how to make it safe.

What Metomic Can Help With

If your organisation is grappling with AI agent risk, or the broader wave of agentic AI heading your way, we can help.

Metomic offers 1:1 AI Readiness Strategy Workshops (with our CTO, Ben van Enckevort) for enterprise teams, designed to help you:

  • Understand the risk landscape for agentic AI in your specific context
  • Build governance frameworks before your employees start experimenting on their own
  • Create safe experimentation environments that enable innovation without exposure
  • Develop policies that balance speed with appropriate guardrails

The technology will mature but right now, you need a plan to experiment safely.

Appendix: Sources
  1. Cyber Unit Security – "Clawdbot Update: From Viral Sensation to Security Cautionary Tale in One Week"February 1, 2026https://cyberunit.com/insights/clawdbot-moltbot-security-update/
  2. Jamieson O'Reilly – Original exposure research, 900+ vulnerable instances identifiedJanuary 25–26, 2026https://x.com/JamiesonOReilly (thread)
  3. Maor Dayan – 42,665 exposed instances, 93.4% with authentication bypassJanuary 28–31, 2026https://x.com/mikifreimann/status/1884621802117914947
  4. VentureBeat – "Infostealers added Clawdbot to their target lists before most security teams knew it was running"January 29, 2026https://venturebeat.com/security/clawdbot-exploits-48-hours-what-broke
  5. Aikido Security – Fake VS Code extension discoveryJanuary 27, 2026https://www.aikido.dev/blog/clawdbot-malicious-vscode-extension
  6. 404 Media – "AI Agent Social Network Database Left Wide Open"January 31, 2026https://www.404media.co/ai-agent-social-network-moltbook-database-left-wide-open/
  7. Cisco Talos / Martin Lee – "What Would Elon Do?" skill analysis (data exfiltration, prompt injection)January 30, 2026https://x.com/mrtlee/status/1884948266310385952
  8. OpenSourceMalware.com – 230+ malicious skills catalogued in ClawdHubJanuary 2026https://opensourcemalware.com (repository)
  9. Juan Carlos Munera – Infostealer malware targeting OpenClaw configuration filesJanuary 2026https://x.com/Jucamu10 (thread)
  10. Yotam Perkal – Vulnerability timeline compilationJanuary 2026https://x.com/YotamPerkal (thread)
  11. Heather Adkins (Google Security Team founding member) – Public advisory: "Don't run Clawdbot"January 2026Cited in Cyber Unit, VentureBeat
  12. SlowMist – Blockchain security analysis of $CLAWD token scamJanuary 27, 2026Cited in Cyber Unit
  13. Malwarebytes – Impersonation campaign analysisJanuary 2026Cited in Cyber Unit

Your team is talking about AI agents. Maybe they're already experimenting with OpenClaw, the open-source assistant that's taken the tech world by storm this past week. Maybe they're asking for budget to explore "agentic AI." Maybe someone just sent you a breathless article about the future of autonomous assistants.

You need a mental model that demystifies AI, something you can use in your next board meeting and which makes the risks intuitive without requiring a computer science degree.

Here it is.

The Obedient Monkey

AI agents like OpenClaw (previously ClawBot) are best understood as obedient monkeys.

A monkey is very capable since it can use tools and pull levers. It can really do things in the real world. People are using OpenClaw to negotiate car prices with dealerships, make restaurant reservations by phone, book flights, clear inboxes, and manage calendars - all autonomously and without human input.

But here's the thing about your monkey: it's not very discerning.

It's trying to listen to your instructions and do its best to get them done. But it's also willing to listen to anybody else's instructions. It doesn't have a strong sense of who should be giving it orders. If someone else tells it to do something, there's a good chance it will just go and do it.

This is the core risk with AI agents. The monkey is powerful, helpful, and eager to please - but it can't always tell the difference between you and someone pretending to be you.

What Is Prompt Injection?

When security researchers talk about "prompt injection," this is what they mean in plain terms:

Somebody else has put an instruction where they're hoping your monkey will see it. And when it sees it, there's a good chance it will just go and do whatever that instruction says.

That instruction might be hidden in an email. It might be buried in a document. It might be embedded in a webpage the agent visits on your behalf. Your monkey sees it, thinks it's a legitimate task, and acts.

The consequences depend on what your monkey has access to. If it can read your email, it can forward sensitive messages. If it can manage your calendar, it can accept meeting invites. If it has access to your bank account (and some people are giving agents exactly that kind of access) it can transfer money.

This isn't theoretical.

Security researchers have demonstrated extracting cryptocurrency private keys from compromised OpenClaw instances in under five minutes using prompt injection attacks.

The Levers Your Monkey Can Pull

To understand the risk, you need to understand what the monkey has been given access to.

OpenClaw and similar AI agents work by connecting to the tools you already use: email, calendar, Slack, WhatsApp, file systems, even your terminal. The more connections you give it, the more capable it becomes, but also the more damage it can do if compromised.

Think of each integration as a lever your monkey can pull:

  • Email access → It can read, send, and delete messages as you
  • Calendar access → It can see your schedule, accept invites, book meetings
  • File system access → It can read, write, and delete documents
  • Browser access → It can fill out forms, click buttons, navigate websites
  • Terminal access → It can execute code and system commands
  • Messaging apps → It can send messages as you to your contacts

The monkey doesn't distinguish between "pull this lever because my owner asked" and "pull this lever because someone in an email told me to." It just pulls levers.

Why This Moment Is Different

AI assistants aren't new. Siri, Alexa, and Google Assistant have been around for years. But those assistants are limited by design: they can set timers and play music, but they can't take meaningful action in your digital life.

OpenClaw is different.

It's the first widely adopted AI agent that can actually do things across your most sensitive systems. It can book flights, negotiate purchases, manage your inbox, and execute code, all without you being present.

This is genuinely useful. It's also genuinely risky.

And right now, in early February 2026, we're watching what happens when a powerful new capability goes viral before the security practices catch up. In the past week alone:

  • Security researchers found over 42,000 OpenClaw instances exposed to the public internet
  • Malware families are already targeting OpenClaw configuration files to steal credentials
  • Over 230 malicious plugins were discovered in the OpenClaw skills marketplace
  • A fake VS Code extension impersonating OpenClaw was caught installing remote access malware

The monkey is out of the cage. The question is whether your organisation is ready.

Three Questions for Your Next Leadership Meeting

When AI agents come up in conversation, here are the questions that matter:

1. What levers have we given the monkey?

Map every integration. If someone in your organisation is running an AI agent, what systems does it have access to? Email? Calendar? File storage? Financial systems? The blast radius of a compromised agent equals every tool it can touch.

2. Who else can talk to our monkey?

AI agents that connect to email, messaging apps, or the web are exposed to untrusted input. Anyone who can send your agent a message or embed instructions in a document it might read, can potentially influence its behaviour. That's a fundamentally different threat model than traditional software.

3. What's our policy on experimentation?

Your people are curious.

They're going to experiment whether you've figured out the guardrails or not. The question is whether they're doing it on work devices connected to corporate systems, or on isolated personal machines with no sensitive data. One of those is manageable. The other is a breach waiting to happen.

The Bottom Line

AI agents are coming. They represent a genuine leap in what's possible: autonomous assistants that can take action on your behalf, around the clock, across every system you use.

But the same capabilities that make them powerful make them dangerous when misconfigured or compromised. And right now, the technology is moving faster than the security practices.

The obedient monkey metaphor captures the essential dynamic: you have a capable, eager assistant that will do what it's told, by you or by anyone else who figures out how to talk to it.

To enable experimentation safely, we all have the responsibility to ensure the monkey is on a very short leash until we figure out how to make it safe.

What Metomic Can Help With

If your organisation is grappling with AI agent risk, or the broader wave of agentic AI heading your way, we can help.

Metomic offers 1:1 AI Readiness Strategy Workshops (with our CTO, Ben van Enckevort) for enterprise teams, designed to help you:

  • Understand the risk landscape for agentic AI in your specific context
  • Build governance frameworks before your employees start experimenting on their own
  • Create safe experimentation environments that enable innovation without exposure
  • Develop policies that balance speed with appropriate guardrails

The technology will mature but right now, you need a plan to experiment safely.

Appendix: Sources
  1. Cyber Unit Security – "Clawdbot Update: From Viral Sensation to Security Cautionary Tale in One Week"February 1, 2026https://cyberunit.com/insights/clawdbot-moltbot-security-update/
  2. Jamieson O'Reilly – Original exposure research, 900+ vulnerable instances identifiedJanuary 25–26, 2026https://x.com/JamiesonOReilly (thread)
  3. Maor Dayan – 42,665 exposed instances, 93.4% with authentication bypassJanuary 28–31, 2026https://x.com/mikifreimann/status/1884621802117914947
  4. VentureBeat – "Infostealers added Clawdbot to their target lists before most security teams knew it was running"January 29, 2026https://venturebeat.com/security/clawdbot-exploits-48-hours-what-broke
  5. Aikido Security – Fake VS Code extension discoveryJanuary 27, 2026https://www.aikido.dev/blog/clawdbot-malicious-vscode-extension
  6. 404 Media – "AI Agent Social Network Database Left Wide Open"January 31, 2026https://www.404media.co/ai-agent-social-network-moltbook-database-left-wide-open/
  7. Cisco Talos / Martin Lee – "What Would Elon Do?" skill analysis (data exfiltration, prompt injection)January 30, 2026https://x.com/mrtlee/status/1884948266310385952
  8. OpenSourceMalware.com – 230+ malicious skills catalogued in ClawdHubJanuary 2026https://opensourcemalware.com (repository)
  9. Juan Carlos Munera – Infostealer malware targeting OpenClaw configuration filesJanuary 2026https://x.com/Jucamu10 (thread)
  10. Yotam Perkal – Vulnerability timeline compilationJanuary 2026https://x.com/YotamPerkal (thread)
  11. Heather Adkins (Google Security Team founding member) – Public advisory: "Don't run Clawdbot"January 2026Cited in Cyber Unit, VentureBeat
  12. SlowMist – Blockchain security analysis of $CLAWD token scamJanuary 27, 2026Cited in Cyber Unit
  13. Malwarebytes – Impersonation campaign analysisJanuary 2026Cited in Cyber Unit

Your team is talking about AI agents. Maybe they're already experimenting with OpenClaw, the open-source assistant that's taken the tech world by storm this past week. Maybe they're asking for budget to explore "agentic AI." Maybe someone just sent you a breathless article about the future of autonomous assistants.

You need a mental model that demystifies AI, something you can use in your next board meeting and which makes the risks intuitive without requiring a computer science degree.

Here it is.

The Obedient Monkey

AI agents like OpenClaw (previously ClawBot) are best understood as obedient monkeys.

A monkey is very capable since it can use tools and pull levers. It can really do things in the real world. People are using OpenClaw to negotiate car prices with dealerships, make restaurant reservations by phone, book flights, clear inboxes, and manage calendars - all autonomously and without human input.

But here's the thing about your monkey: it's not very discerning.

It's trying to listen to your instructions and do its best to get them done. But it's also willing to listen to anybody else's instructions. It doesn't have a strong sense of who should be giving it orders. If someone else tells it to do something, there's a good chance it will just go and do it.

This is the core risk with AI agents. The monkey is powerful, helpful, and eager to please - but it can't always tell the difference between you and someone pretending to be you.

What Is Prompt Injection?

When security researchers talk about "prompt injection," this is what they mean in plain terms:

Somebody else has put an instruction where they're hoping your monkey will see it. And when it sees it, there's a good chance it will just go and do whatever that instruction says.

That instruction might be hidden in an email. It might be buried in a document. It might be embedded in a webpage the agent visits on your behalf. Your monkey sees it, thinks it's a legitimate task, and acts.

The consequences depend on what your monkey has access to. If it can read your email, it can forward sensitive messages. If it can manage your calendar, it can accept meeting invites. If it has access to your bank account (and some people are giving agents exactly that kind of access) it can transfer money.

This isn't theoretical.

Security researchers have demonstrated extracting cryptocurrency private keys from compromised OpenClaw instances in under five minutes using prompt injection attacks.

The Levers Your Monkey Can Pull

To understand the risk, you need to understand what the monkey has been given access to.

OpenClaw and similar AI agents work by connecting to the tools you already use: email, calendar, Slack, WhatsApp, file systems, even your terminal. The more connections you give it, the more capable it becomes, but also the more damage it can do if compromised.

Think of each integration as a lever your monkey can pull:

  • Email access → It can read, send, and delete messages as you
  • Calendar access → It can see your schedule, accept invites, book meetings
  • File system access → It can read, write, and delete documents
  • Browser access → It can fill out forms, click buttons, navigate websites
  • Terminal access → It can execute code and system commands
  • Messaging apps → It can send messages as you to your contacts

The monkey doesn't distinguish between "pull this lever because my owner asked" and "pull this lever because someone in an email told me to." It just pulls levers.

Why This Moment Is Different

AI assistants aren't new. Siri, Alexa, and Google Assistant have been around for years. But those assistants are limited by design: they can set timers and play music, but they can't take meaningful action in your digital life.

OpenClaw is different.

It's the first widely adopted AI agent that can actually do things across your most sensitive systems. It can book flights, negotiate purchases, manage your inbox, and execute code, all without you being present.

This is genuinely useful. It's also genuinely risky.

And right now, in early February 2026, we're watching what happens when a powerful new capability goes viral before the security practices catch up. In the past week alone:

  • Security researchers found over 42,000 OpenClaw instances exposed to the public internet
  • Malware families are already targeting OpenClaw configuration files to steal credentials
  • Over 230 malicious plugins were discovered in the OpenClaw skills marketplace
  • A fake VS Code extension impersonating OpenClaw was caught installing remote access malware

The monkey is out of the cage. The question is whether your organisation is ready.

Three Questions for Your Next Leadership Meeting

When AI agents come up in conversation, here are the questions that matter:

1. What levers have we given the monkey?

Map every integration. If someone in your organisation is running an AI agent, what systems does it have access to? Email? Calendar? File storage? Financial systems? The blast radius of a compromised agent equals every tool it can touch.

2. Who else can talk to our monkey?

AI agents that connect to email, messaging apps, or the web are exposed to untrusted input. Anyone who can send your agent a message or embed instructions in a document it might read, can potentially influence its behaviour. That's a fundamentally different threat model than traditional software.

3. What's our policy on experimentation?

Your people are curious.

They're going to experiment whether you've figured out the guardrails or not. The question is whether they're doing it on work devices connected to corporate systems, or on isolated personal machines with no sensitive data. One of those is manageable. The other is a breach waiting to happen.

The Bottom Line

AI agents are coming. They represent a genuine leap in what's possible: autonomous assistants that can take action on your behalf, around the clock, across every system you use.

But the same capabilities that make them powerful make them dangerous when misconfigured or compromised. And right now, the technology is moving faster than the security practices.

The obedient monkey metaphor captures the essential dynamic: you have a capable, eager assistant that will do what it's told, by you or by anyone else who figures out how to talk to it.

To enable experimentation safely, we all have the responsibility to ensure the monkey is on a very short leash until we figure out how to make it safe.

What Metomic Can Help With

If your organisation is grappling with AI agent risk, or the broader wave of agentic AI heading your way, we can help.

Metomic offers 1:1 AI Readiness Strategy Workshops (with our CTO, Ben van Enckevort) for enterprise teams, designed to help you:

  • Understand the risk landscape for agentic AI in your specific context
  • Build governance frameworks before your employees start experimenting on their own
  • Create safe experimentation environments that enable innovation without exposure
  • Develop policies that balance speed with appropriate guardrails

The technology will mature but right now, you need a plan to experiment safely.

Appendix: Sources
  1. Cyber Unit Security – "Clawdbot Update: From Viral Sensation to Security Cautionary Tale in One Week"February 1, 2026https://cyberunit.com/insights/clawdbot-moltbot-security-update/
  2. Jamieson O'Reilly – Original exposure research, 900+ vulnerable instances identifiedJanuary 25–26, 2026https://x.com/JamiesonOReilly (thread)
  3. Maor Dayan – 42,665 exposed instances, 93.4% with authentication bypassJanuary 28–31, 2026https://x.com/mikifreimann/status/1884621802117914947
  4. VentureBeat – "Infostealers added Clawdbot to their target lists before most security teams knew it was running"January 29, 2026https://venturebeat.com/security/clawdbot-exploits-48-hours-what-broke
  5. Aikido Security – Fake VS Code extension discoveryJanuary 27, 2026https://www.aikido.dev/blog/clawdbot-malicious-vscode-extension
  6. 404 Media – "AI Agent Social Network Database Left Wide Open"January 31, 2026https://www.404media.co/ai-agent-social-network-moltbook-database-left-wide-open/
  7. Cisco Talos / Martin Lee – "What Would Elon Do?" skill analysis (data exfiltration, prompt injection)January 30, 2026https://x.com/mrtlee/status/1884948266310385952
  8. OpenSourceMalware.com – 230+ malicious skills catalogued in ClawdHubJanuary 2026https://opensourcemalware.com (repository)
  9. Juan Carlos Munera – Infostealer malware targeting OpenClaw configuration filesJanuary 2026https://x.com/Jucamu10 (thread)
  10. Yotam Perkal – Vulnerability timeline compilationJanuary 2026https://x.com/YotamPerkal (thread)
  11. Heather Adkins (Google Security Team founding member) – Public advisory: "Don't run Clawdbot"January 2026Cited in Cyber Unit, VentureBeat
  12. SlowMist – Blockchain security analysis of $CLAWD token scamJanuary 27, 2026Cited in Cyber Unit
  13. Malwarebytes – Impersonation campaign analysisJanuary 2026Cited in Cyber Unit