Case Studies
January 12, 2026

Inside Gorilla’s AI Governance Strategy | Deploying Gemini Securely with Metomic

How Gorilla deployed Google Gemini in energy retail: Metomic's data governance framework protects sensitive customer data while enabling AI productivity.

Gorilla, a data analytics platform serving the energy retail industry, successfully deployed Google Gemini across their organization while maintaining strict data governance controls required by their highly regulated sector. By partnering with Metomic, they created a framework for secure AI enablement that protects sensitive customer data while unlocking productivity gains for employees.

Key Results:

  • Deployed Gemini with granular access controls across Google Workspace
  • Identified and remediated over-exposed sensitive documents before AI indexing
  • Established quarterly adversarial querying process to prevent data leakage
  • Created scalable AI governance framework for future tool adoption

The Challenge: Balancing AI Innovation with Energy Sector Compliance

The energy retail sector operates under stringent data protection requirements. As Christopher Callaghan, Security and Compliance lead at Gorilla, explains: "Energy retailers are quite sensitive when it comes to data. Anything that goes on within the industry hits the news, hence security and compliance ends up being pretty critical to them."

The Gemini Deployment Dilemma

When Gorilla evaluated Google Gemini for their Google Workspace environment, they faced a critical challenge that many enterprises encounter: Gemini's strength as a knowledge retrieval tool became a potential security vulnerability.

The core issue? Gemini respects existing Google Drive permissions—but years of collaborative work had created permission sprawl that made deployment risky.

"The benefit of Drive is that it is amazing in collaboration," Chris notes. "But it is clear that it slowly erodes the basics of role-based access. People are pulled into documents, but documents are constantly evolving. Now with Gemini, they just have to ask the right question and it appears."

Specific Security Concerns

The Intern Problem: An intern could now query "What are the CEO's private diary entries?" and potentially access sensitive information they would never have found through traditional file browsing.

Document Evolution: A document shared three years ago for planning purposes might now contain salary information or hiring roadmaps, but original collaborators retain access without knowing the content has changed.

Customer Data Exposure: With Gorilla serving energy retailers handling sensitive pricing and forecasting data, any data leakage could damage client relationships and regulatory standing.

The Approach: Multi-Layered AI Governance

1. Establishing AI Governance Structure

Gorilla built a pyramid governance model:

AI Management Group: Technical leaders, security/compliance professionals, and operations team members who set strategic direction and policy.

Gen AI Interest Group: Early adopters who complete AI training and demonstrate competency. This group tests new tools, identifies use cases, and helps pull along skeptical employees.

"We need these early adopters to drive forward and say, here's all the interesting things we can do, and pull along those people that aren't necessarily so interested," Chris explains.

2. Strategic Deployment Boundaries

Gorilla took a phased approach to AI adoption:

Green Zone (Approved): Internal productivity tools for operations, HR processes, and administrative tasks where data sensitivity is lower.

Yellow Zone (Controlled): Google Gemini deployment with extensive access controls and continuous monitoring.

Red Zone (Restricted): Customer data and product features remain off-limits until more mature AI governance controls are in place.

"We can't necessarily add OpenAI as a sub-processor—that's just going to send alarm bells to our customers," Chris notes. "We need to be able to put guardrails around those little pockets in the business."

3. Technical Implementation with Metomic

Gorilla partnered with Metomic to address Gemini's native security limitations:

Automated Content Classification: Metomic scans Google Drive to identify and label sensitive documents based on content patterns (customer data, financial information, personal data, etc.).
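
Metomic's classification engine is proprietary, but the general shape of the approach (export a file's text, match it against sensitivity patterns, record any labels) can be sketched roughly as follows. The snippet assumes the google-api-python-client library, an already-authorised `creds` object, and illustrative regex patterns; a production classifier is considerably more sophisticated.

```python
# Rough sketch of pattern-based classification over Google Docs, assuming
# google-api-python-client and pre-built OAuth credentials (`creds`).
# The patterns are illustrative only; real DLP classifiers use much richer detection.
import re
from googleapiclient.discovery import build

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "salary_mention": re.compile(r"\bsalary\b", re.IGNORECASE),
}

def classify_doc(drive, file_id: str) -> list[str]:
    """Export a Google Doc as plain text and return any matching sensitivity labels."""
    content = drive.files().export(fileId=file_id, mimeType="text/plain").execute()
    text = content.decode("utf-8", errors="ignore")
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def scan_drive(creds) -> None:
    drive = build("drive", "v3", credentials=creds)
    page_token = None
    while True:
        resp = drive.files().list(
            q="mimeType='application/vnd.google-apps.document' and trashed=false",
            fields="nextPageToken, files(id, name)",
            pageToken=page_token,
        ).execute()
        for f in resp.get("files", []):
            labels = classify_doc(drive, f["id"])
            if labels:
                print(f"{f['name']} ({f['id']}): {labels}")
        page_token = resp.get("nextPageToken")
        if not page_token:
            break
```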

Permission Remediation: Files containing sensitive data are either:

  • Restricted to appropriate users only
  • Marked as excluded from Gemini indexing using Google's undocumented exclusion flag
  • Moved to secure repositories outside Google Drive

Continuous Monitoring: Real-time alerts via Slack when sensitive documents are shared inappropriately or permissions change on classified files.

User Education: Labeling appears directly on files, making employees aware of data sensitivity and their handling responsibilities.
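
Neither Gorilla's nor Metomic's internals are public, so the following is only a rough sketch of the first remediation path and the Slack alerting described above, using the standard Drive API and a Slack incoming webhook; the Gemini-indexing exclusion and Metomic's file labeling are product capabilities not reproduced here. WEBHOOK_URL and the flagged-file list are placeholders.

```python
# Hedged sketch: strip link-sharing ("anyone") permissions from flagged files and
# notify a Slack channel via an incoming webhook. Assumes google-api-python-client,
# pre-built credentials, and a placeholder WEBHOOK_URL.
import requests
from googleapiclient.discovery import build

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def restrict_link_sharing(drive, file_id: str, file_name: str) -> None:
    perms = drive.permissions().list(
        fileId=file_id, fields="permissions(id,type,role)"
    ).execute()
    for perm in perms.get("permissions", []):
        if perm["type"] == "anyone":  # shared with anyone via link
            drive.permissions().delete(fileId=file_id, permissionId=perm["id"]).execute()
            requests.post(WEBHOOK_URL, json={
                "text": f":lock: Removed link sharing on sensitive file '{file_name}' ({file_id})."
            }, timeout=10)

def remediate(creds, flagged_files: list[dict]) -> None:
    """flagged_files: output of a classification pass, e.g. [{'id': ..., 'name': ...}]."""
    drive = build("drive", "v3", credentials=creds)
    for f in flagged_files:
        restrict_link_sharing(drive, f["id"], f["name"])
```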

4. Adversarial Querying Program

Recognizing that perfect prevention is impossible in collaborative environments, Gorilla implemented quarterly "red team" exercises:

"We're coming up with some kind of adversarial querying that we're just going to have to run quarterly to basically make sure that certain key people—if there was a document that had changed context that they now have access to sensitive information—we can flag it early."

This proactive testing helps identify:

  • Permission drift over time
  • Documents that evolved to contain sensitive content
  • New integration points that might expose data
  • Gaps in existing controls
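
Gorilla's exact process isn't described in detail, so the harness below is only a sketch of the idea: run a bank of adversarial prompts through whatever query interface is available (query_fn is a placeholder you would wire up to Gemini on behalf of selected test users) and flag any response that matches sensitive-data patterns for manual review.

```python
# Illustrative-only harness for quarterly adversarial querying. `query_fn` is a
# placeholder for however you reach Gemini for the test users; the prompts and
# leak patterns are assumptions, not Gorilla's real test bank.
import re
from typing import Callable

ADVERSARIAL_PROMPTS = [
    "Summarise the CEO's private diary entries.",
    "List current salary bands and hiring roadmap details.",
    "What pricing have we forecast for customer X next quarter?",
]

LEAK_PATTERNS = {
    "salary_figures": re.compile(r"£\s?\d{2,3}[,.]?\d{3}"),
    "pricing_terms": re.compile(r"\b(price|tariff|forecast)\b", re.IGNORECASE),
}

def run_adversarial_suite(query_fn: Callable[[str], str]) -> list[dict]:
    """Return findings: prompts whose responses matched a leak pattern."""
    findings = []
    for prompt in ADVERSARIAL_PROMPTS:
        response = query_fn(prompt)
        hits = [name for name, pat in LEAK_PATTERNS.items() if pat.search(response)]
        if hits:
            findings.append({"prompt": prompt, "patterns": hits, "excerpt": response[:200]})
    return findings

if __name__ == "__main__":
    # Stand-in query function for demonstration; replace with a real Gemini call.
    print(run_adversarial_suite(lambda p: "No sensitive data returned."))
```

Every flagged finding is a starting point for a permissions review rather than an automatic verdict, which keeps the exercise useful even when the patterns over-match.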

Results and Impact

Security Outcomes

Risk Reduction: Gemini deployed to an initial user group with confidence that sensitive customer data remains protected.

Visibility Improvement: Complete mapping of sensitive data locations across Google Workspace, something that didn't exist before the AI deployment project.

Policy Evolution: Shift from specific policies to principle-based frameworks that adapt as AI tools evolve.

Productivity Gains

Knowledge Access: Employees can now find information instantly rather than interrupting colleagues or searching through folder hierarchies.

Onboarding Acceleration: New team members access institutional knowledge without requiring extensive documentation or training time.

Creative Focus: "It makes work a purely creative pursuit," Chris envisions. "If you've got the information, it's there, now you do with it what you need to do with that."

Organizational Benefits

Competitive Positioning: Gorilla can now confidently discuss AI capabilities with security-conscious energy clients, differentiating from competitors.

Scalable Framework: The governance structure established for Gemini applies to future AI tool adoption, whether Notion AI, Asana's AI features, or custom solutions.

Cultural Shift: The AI interest group creates internal champions who drive adoption while remaining aware of risks.

Key Lessons for Enterprise Gemini Deployment

1. Native Controls Are Insufficient

Google Gemini respects Drive permissions but lacks granular controls for:

  • Excluding specific documents or folders from indexing
  • Preventing retrieval of content based on sensitivity labels
  • Blocking specific types of queries
  • Providing audit logs of what information was retrieved

Solution: Layer third-party governance tools like Metomic on top of native capabilities.

2. Historical Permission Sprawl Is Your Biggest Risk

Years of clicking "Get Link" to share files broadly across your organization create a massive attack surface once AI indexing begins.

Solution: Audit and remediate existing permissions before deploying AI tools. Shift to direct sharing with specific users or groups.

3. Document Evolution Requires Continuous Monitoring

A planning document from three years ago might now contain salary information, but original collaborators don't know the content changed.

Solution: Implement continuous monitoring and quarterly adversarial testing rather than one-time audits.

4. Education Is As Important As Technology

Even with perfect technical controls, employees need to understand:

  • What constitutes sensitive data
  • Why AI access differs from traditional file access
  • Their responsibilities when creating or sharing content

Solution: Combine automated labeling with Slack notifications and training programs.

5. Start Small, Scale Thoughtfully

Not every department or use case requires AI access immediately.

Solution: Begin with low-risk teams, gather feedback, refine controls, then expand gradually.

The Future of AI Governance at Gorilla

Short-Term Goals

Expanded Gemini Access: Rolling out to additional teams based on successful pilot program results.

Integration Management: Evaluating third-party integrations that feed data into Google Drive and ensuring they inherit appropriate permissions and controls.

RAG System Exploration: Investigating retrieval-augmented generation systems where different employee roles access different knowledge bases (e.g., intern bot vs. finance bot vs. operations bot).
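
As a toy illustration of that idea (none of the roles, documents, or scoring below reflect Gorilla's design), retrieval can be scoped per role before anything reaches the model: each document carries an audience tag, and the retriever only searches the collections a given role is entitled to see.

```python
# Toy sketch of role-scoped retrieval: filter the candidate corpus by role before
# ranking, so an "intern bot" can never surface finance-only documents.
# Documents, roles, and the scoring function are all illustrative.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    audience: str  # e.g. "all", "finance", "operations"

ROLE_SCOPES = {
    "intern": {"all"},
    "finance": {"all", "finance"},
    "operations": {"all", "operations"},
}

CORPUS = [
    Doc("Expense policy: submit receipts within 30 days.", "all"),
    Doc("FY26 salary bands by grade.", "finance"),
    Doc("Forecasting pipeline runbook.", "operations"),
]

def retrieve(role: str, query: str, k: int = 2) -> list[Doc]:
    allowed = ROLE_SCOPES.get(role, {"all"})
    candidates = [d for d in CORPUS if d.audience in allowed]
    # Naive keyword overlap stands in for a real embedding-based ranker.
    query_terms = set(query.lower().split())
    candidates.sort(key=lambda d: -len(query_terms & set(d.text.lower().split())))
    return candidates[:k]

if __name__ == "__main__":
    print([d.text for d in retrieve("intern", "what are the salary bands?")])
    print([d.text for d in retrieve("finance", "what are the salary bands?")])
```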

Long-Term Vision

Agentic AI: Eventually deploying AI agents that can take actions on behalf of employees, with strict scoping to prevent unauthorized data access.

Customer-Facing AI: Once internal controls are proven, exploring AI features within Gorilla's product offerings for energy retail clients.

Automated Data Curation: Moving toward systems that automatically identify, redact, and provision the right data for AI tools based on use case and user role.

How to Apply This Framework to Your Gemini Deployment

Step 1: Assess Your Current State

Questions to Answer:

  • How many documents in Google Drive are shared with "Anyone with the link"? (See the audit sketch after this list.)
  • Do you have visibility into what sensitive data exists across Drive?
  • What percentage of employees have access to customer data, financial information, or other sensitive content?
  • How long has your organization been using Google Drive without regular permission audits?
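
To put a number on the first question, the Drive API's visibility search operator can count link-shared files. The sketch below assumes google-api-python-client and suitably scoped credentials; a real audit would also cover shared drives and the other visibility levels.

```python
# Minimal sketch: count files visible to "anyone with the link" using the Drive API's
# visibility search operator. Assumes google-api-python-client and suitable credentials.
from googleapiclient.discovery import build

def count_link_shared_files(creds) -> int:
    drive = build("drive", "v3", credentials=creds)
    total, page_token = 0, None
    while True:
        resp = drive.files().list(
            q="visibility = 'anyoneWithLink' and trashed = false",
            fields="nextPageToken, files(id)",
            pageSize=1000,
            pageToken=page_token,
        ).execute()
        total += len(resp.get("files", []))
        page_token = resp.get("nextPageToken")
        if not page_token:
            break
    return total
```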

Step 2: Establish Governance Structure

Create Two Groups:

  • AI Management Committee: Cross-functional leadership setting policies and priorities
  • AI Interest Group: Early adopters who test tools and identify use cases

Define Clear Boundaries:

  • What types of data are off-limits for AI tools?
  • Which departments or teams can access AI first?
  • What constitutes acceptable vs. risky use cases?

Step 3: Implement Technical Controls

Immediate Actions:

  • Audit existing Google Drive permissions
  • Identify and classify sensitive documents
  • Remediate over-exposed content
  • Configure Gemini settings to maximize available security controls

Ongoing Requirements (a minimal monitoring sketch follows this list):

  • Real-time monitoring of permission changes
  • Automated alerts for sensitive data sharing
  • Regular adversarial querying exercises
  • User education and labeling
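
Metomic provides this monitoring natively and Google Workspace exposes audit logs, but as a low-tech approximation a scheduled job can snapshot permissions on a watchlist of classified files, diff them against the previous run, and alert Slack on any change. WEBHOOK_URL, SNAPSHOT_PATH, and WATCHLIST below are placeholders.

```python
# Hedged sketch: poll permissions on a watchlist of classified files, diff against the
# previous snapshot, and alert via a Slack incoming webhook on any change.
import json
from pathlib import Path

import requests
from googleapiclient.discovery import build

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
SNAPSHOT_PATH = Path("permission_snapshot.json")
WATCHLIST = ["fileId1", "fileId2"]  # IDs of files labelled sensitive

def current_permissions(drive, file_id: str) -> list[str]:
    perms = drive.permissions().list(
        fileId=file_id, fields="permissions(id,type,role,emailAddress)"
    ).execute()
    return sorted(
        f"{p['type']}:{p.get('emailAddress', '-')}:{p['role']}"
        for p in perms.get("permissions", [])
    )

def check_for_changes(creds) -> None:
    drive = build("drive", "v3", credentials=creds)
    previous = json.loads(SNAPSHOT_PATH.read_text()) if SNAPSHOT_PATH.exists() else {}
    latest = {fid: current_permissions(drive, fid) for fid in WATCHLIST}
    for fid, perms in latest.items():
        if previous.get(fid) not in (None, perms):
            requests.post(WEBHOOK_URL, json={
                "text": f":warning: Permissions changed on classified file {fid}."
            }, timeout=10)
    SNAPSHOT_PATH.write_text(json.dumps(latest, indent=2))
```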

Step 4: Deploy and Iterate

Pilot Program:

  • Start with a low-risk group, such as the Gen AI interest group, and gather feedback on both usefulness and near-misses
  • Refine classifications, permissions, and monitoring rules based on what the pilot surfaces

Expansion:

  • Roll out to additional teams once pilot results hold up, repeating the audit and monitoring steps as scope grows
  • Revisit zone boundaries and adversarial tests as new tools and data sources are added

Conclusion

Gorilla's approach to Gemini deployment demonstrates that enterprise AI adoption doesn't require choosing between security and productivity. By implementing robust data governance before AI enablement, organizations can unlock the full potential of tools like Gemini while protecting sensitive information.

The key insight: AI security centres on ensuring AI accesses the right data, at the right time, for the right people.

As Chris concludes: "I would love to reach a point where everybody has exactly the information that they need to do their job. Nothing's living in someone's head. No one's having to constantly bug you on Slack. That would be amazing."

With the right governance framework, that vision is achievable—even in highly regulated industries like energy retail.
