Phase 1: Ad-Hoc AI Adoption and Deployment

Awakening to AI – Discovery and Shadow Use

An enterprise organization’s journey with AI often begins in the shadows. Without formal strategy or oversight, employees start adopting generative AI tools such as ChatGPT, GitHub Copilot, and browser-based assistants to boost their productivity. While these tools deliver quick wins, such as faster writing, coding, and task automation, they introduce hidden and unmanaged risks. This early, decentralized adoption phase is marked by one critical truth: AI is already being used inside your organization, whether you know it or not.

Key Challenge: Lack of Visibility into Shadow AI

Most security and IT teams have limited or no visibility into which AI tools are being used, who is using them, and what data is being shared. Employees may be entering sensitive information, such as PII, financial data, or internal documents, into AI tools that route it through third-party APIs or cloud providers.

Example Risk: An HR representative pastes candidate details into a personal ChatGPT account, rather than the corporate-approved instance, to draft an offer letter, unknowingly sending personal data, including compensation figures and background information, to a third-party model provider without legal or security review.

IAM Risk: No Attribution or Identity Mapping

In this phase, AI tools operate outside the organization’s identity and access management (IAM) systems. There’s no consistent method for identifying:

  • Which users are using which tools

  • What data is being accessed or shared

  • Whether an action was performed by a human, a copilot assistant, or an AI service

Outcome: Without user-to-AI attribution, organizations face audit gaps, compliance violations, and a lack of accountability when issues arise.

Security Risk: Uncontrolled Access via OAuth and Plugins

Employees may install browser extensions or authorize third-party apps that request broad OAuth scopes, often granting full access to calendars, email, cloud drives, and more.

Critical Risk: These integrations may operate with default or excessive permissions, creating blind spots and exposing data to exfiltration or insider threats.
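
To make the exposure concrete, below is a minimal sketch of the kind of check a security team might run against an exported list of OAuth grants. The broad Google scopes listed are real, but the CSV format and helper names are illustrative assumptions, not any vendor's actual export schema.

  # flag_broad_scopes.py -- a minimal, illustrative sketch, not a vendor tool.
  # Assumes a CSV export with columns: user, app_name, scopes (space-separated).
  import csv

  # Real examples of OAuth scopes that grant full access to a Google service.
  BROAD_SCOPES = {
      "https://mail.google.com/",                  # full Gmail access
      "https://www.googleapis.com/auth/drive",     # full Drive access
      "https://www.googleapis.com/auth/calendar",  # full Calendar access
  }

  def flag_broad_grants(path):
      """Yield (user, app, offending scopes) for grants with full-access scopes."""
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              broad = set(row["scopes"].split()) & BROAD_SCOPES
              if broad:
                  yield row["user"], row["app_name"], sorted(broad)

  if __name__ == "__main__":
      for user, app, scopes in flag_broad_grants("oauth_grants.csv"):
          print(f"REVIEW: {user} granted {app}: {', '.join(scopes)}")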

Solution Focus: Discover and Contain

To reduce risk without stifling innovation, organizations must focus on discovery, containment, and foundational governance:

Discovery Tactics

  • Use CASB, DNS, endpoint, browser, and firewall telemetry to identify access to AI tools (a sketch follows this list)

  • Conduct interviews or surveys to understand team-level AI use cases

  • Tag common data types used with AI (e.g., HR text, marketing copy)
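
As a sketch of the telemetry bullet above, the snippet below tallies lookups of well-known generative AI domains per client from a simple DNS log. The one-line-per-query log format and the short domain list are simplifying assumptions; in practice this telemetry would come from a CASB, firewall, or resolver export.

  # ai_domain_discovery.py -- illustrative sketch; the log format is an assumption.
  # Expected log lines: "<timestamp> <client_id> <queried_domain>"
  from collections import Counter

  # A small, non-exhaustive sample of generative AI domains to watch for.
  AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "api.openai.com",
                "copilot.microsoft.com", "claude.ai", "gemini.google.com"}

  def tally_ai_access(log_path):
      """Count AI-domain lookups per (client, domain) pair."""
      hits = Counter()
      with open(log_path) as f:
          for line in f:
              parts = line.split()
              if len(parts) >= 3 and parts[2].lower() in AI_DOMAINS:
                  hits[(parts[1], parts[2].lower())] += 1
      return hits

  if __name__ == "__main__":
      for (client, domain), count in tally_ai_access("dns.log").most_common():
          print(f"{client} -> {domain}: {count} lookups")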

Early Security Controls

  • Enforce DLP and proxy filtering for known AI-related domains (see the sketch after this list)

  • Audit and revoke unsafe OAuth scopes and browser extensions

  • Begin tracking AI-linked service accounts and plugins
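
The first control above lends itself to a simple illustration: gate outbound text bound for a known AI domain and block it when it matches obvious PII patterns. Real DLP engines use far richer detectors; the two regexes and the function shape here are illustrative assumptions.

  # dlp_gate.py -- toy DLP check for AI-bound payloads; patterns are illustrative.
  import re

  AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai"}

  # Toy detectors: US SSN-like and email-like strings. Real DLP goes far deeper.
  PII_PATTERNS = [
      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like
      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email-like
  ]

  def allow_request(dest_domain: str, body: str) -> bool:
      """Block payloads to AI domains that contain PII-like strings."""
      if dest_domain.lower() not in AI_DOMAINS:
          return True  # not an AI endpoint; outside this gate's scope
      return not any(p.search(body) for p in PII_PATTERNS)

  if __name__ == "__main__":
      print(allow_request("chat.openai.com", "Summarize our Q3 roadmap"))  # True
      print(allow_request("chat.openai.com", "SSN 123-45-6789, J. Doe"))   # False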

Governance Measures

  • Assign AI oversight responsibilities to IT and Security leaders

  • Draft an initial Acceptable Use Policy (AUP) for AI tools

  • Define boundaries for acceptable tools and data usage

  • Create clear identity requirements so that each agent has its own identity and access profile (a schema sketch follows this list)
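
To make the last bullet concrete, one option is to model each agent as a first-class identity record with its own owner, scopes, and expiry, so it can be reviewed like any service account. The field names below are an assumed schema for illustration, not a standard.

  # agent_identity.py -- assumed schema for per-agent identities (Python 3.10+).
  from dataclasses import dataclass, field
  from datetime import date

  @dataclass
  class AgentIdentity:
      """A distinct, auditable identity for one AI agent or copilot."""
      agent_id: str                # unique principal, never shared with a human
      human_owner: str             # accountable employee, enabling attribution
      allowed_scopes: list[str] = field(default_factory=list)  # least privilege
      data_classes: list[str] = field(default_factory=list)    # e.g., "internal"
      expires: date | None = None  # forces periodic re-review of the grant

  # Hypothetical example: a narrowly scoped, attributable drafting copilot.
  hr_copilot = AgentIdentity(
      agent_id="svc-hr-copilot-01",
      human_owner="jsmith@example.com",
      allowed_scopes=["templates.read", "drafts.write"],
      data_classes=["internal"],
      expires=date(2026, 1, 1),
  )
  print(hr_copilot)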

Strategic Outcome

Ultimately, this phase is less about halting AI experimentation and more about creating visibility and laying the groundwork for structured growth. By identifying where AI is already in use and by implementing basic controls, organizations can reduce risk exposure and prepare for structured adoption in the next phase of their maturity journey. The goal is clear: enable AI innovation, securely.
