Shadow AI Security: How to Audit and Control Unsanctioned AI Tool Usage


It starts small and innocent.

Someone uses ChatGPT to polish a difficult client email. A marketing team member enables an AI writing assistant to speed up blog drafts. An engineer pastes code into an AI tool to debug faster. A salesperson feeds prospect information into an AI chatbot for email suggestions.

Then it becomes routine. And once it's routine, nobody thinks about what data they're sharing or where it's going.

That's shadow AI—and it's creating data governance gaps your business may not even know exist.

At Lewis IT, we help Maryland businesses understand that shadow AI isn't about employees being reckless. It's about people trying to work faster with powerful tools they don't fully understand—and inadvertently exposing sensitive data in the process.

According to recent research, 38% of employees admit they've shared sensitive work information with AI tools without permission. They're not malicious. They're just trying to be productive.

As Microsoft emphasizes in its guidance on preventing data leaks to shadow AI: this is a data leak problem, not a productivity problem. The core risk is simple—employees use AI tools without oversight, and sensitive data ends up outside the controls you rely on for governance and compliance.

What Is Shadow AI Security?

Shadow AI is the unsanctioned use of AI tools without IT approval or oversight, driven by speed and convenience.

Why it matters in 2026: AI isn't just standalone tools employees choose to use—it's increasingly embedded directly into applications you already rely on. It expands through plug-ins, extensions, and third-party "copilots" that can access business data with minimal friction.

The challenge: The "helpful shortcut" becomes a blind spot when IT can't see what's being used, by whom, or with what data.

Where Shadow AI Hides in Your Organization

Lewis IT discovers shadow AI in these common places:

Marketing:

  • AI writing assistants for content creation
  • Image generation tools fed with brand materials
  • AI-powered social media schedulers accessing customer data

Sales:

  • AI email composers using prospect information
  • Chatbots trained on customer conversations
  • Proposal generators with pricing and client details

HR:

  • Resume screening tools processing applicant information
  • Performance review AI accessing employee data
  • Onboarding chatbots with confidential company information

Engineering:

  • Code completion tools learning from proprietary code
  • AI debugging assistants analyzing sensitive logic
  • Documentation generators accessing internal systems

Support:

  • Customer service AI trained on support tickets
  • Chatbots accessing customer account information
  • Automated response tools with PII access

The common thread: Well-intentioned employees using AI to work faster, unaware they're creating data exposure.

The Two Ways Shadow AI Security Fails

Failure 1: Zero Visibility—You Don't Know What's Being Used

Shadow AI isn't always a shiny new app someone signs up for. It can be:

  • An AI add-on enabled inside an existing platform (Microsoft Copilot, Google Gemini)
  • A browser extension (Grammarly, ChatGPT Chrome extension)
  • A feature that only appears for certain users
  • A third-party integration buried in settings

The problem: AI usage spreads without a clear "moment" where IT would review or approve it.

Lewis IT's perspective: This is a visibility problem first. If you can't reliably discover where AI is being used, you can't apply consistent controls.

Failure 2: Visibility Without Control—You Know It's Happening But Can't Manage It

Even when you can name the tools, shadow AI security fails if you can't enforce consistent behavior.

This happens when AI activity:

  • Lives outside managed identity systems (personal accounts, not SSO)
  • Bypasses normal logging (no audit trail)
  • Isn't governed by clear policy defining acceptable use

The result: "Known unknowns"—people assume it's happening, but no one can document, standardize, or control it.

The governance nightmare: You lose confidence in where data flows and how it's being used across workflows and third parties.

The Purpose Creep Problem

Lewis IT emphasizes an overlooked risk: It's not just which tool someone used—it's what that tool continues to do with the data over time.

Purpose creep: When data begins to be used in ways that no longer align with its original purpose, disclosures, or agreements.

Example: Employee pastes customer support conversation into AI tool to draft response. AI provider uses that conversation to train their model. Your customer data is now permanently embedded in AI that serves all their customers—including your competitors.

The Lewis IT Shadow AI Audit Framework

A shadow AI audit should feel like routine maintenance, not a crackdown. The goal: gain clarity quickly, reduce significant risks first, keep the team productive.

Step 1: Discover Usage Without Disruption

Lewis IT approach: Review signals you already have before sending out company-wide questionnaires.

Practical discovery sources:

Identity and Authentication Logs:

  • Who is signing into which tools?
  • Are accounts managed (SSO) or personal?
  • What AI services appear in authentication logs?

Browser and Endpoint Telemetry:

  • What extensions are installed on managed devices?
  • Which websites are frequently visited?
  • What cloud services are being accessed?

SaaS Admin Settings:

  • Which AI features are enabled in Microsoft 365, Google Workspace, Slack?
  • What third-party integrations are connected?
  • Which users have activated AI assistants?

Network Traffic Analysis:

  • Connections to known AI service endpoints (OpenAI, Anthropic, Google AI)
  • Data upload patterns to cloud AI services
  • API calls to AI platforms
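The network-traffic signal above can be checked with a short script. This is a minimal sketch: the CSV log format (`timestamp,user,query_domain` columns) and the domain list are assumptions to adapt to whatever your DNS filter or proxy actually exports.

```python
# Sketch: flag DNS queries to known AI service endpoints in an
# exported resolver log. Log format and domain list are assumed;
# adapt both to your own DNS filter's export.
import csv
from collections import Counter

AI_DOMAINS = {
    "api.openai.com", "chat.openai.com", "chatgpt.com",
    "api.anthropic.com", "claude.ai",
    "gemini.google.com", "generativelanguage.googleapis.com",
}

def scan_dns_log(path):
    """Count queries per (user, AI domain) from a CSV with
    columns: timestamp,user,query_domain."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["query_domain"].lower().rstrip(".")
            # Match the domain itself or any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], domain)] += 1
    return hits
```

The per-user counts give you a starting inventory of who is touching which AI services, without interrupting anyone's workflow.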

Non-Judgmental Self-Report:

Instead of: "Report all unauthorized AI usage immediately"

Try: "What AI tools or features are helping you save time right now? We want to support them safely."

Why this works: Shadow AI is often adopted for productivity, not to bypass security. You'll get better answers with supportive framing.

Lewis IT implementation: We deploy discovery tools that identify AI usage without disrupting workflows, providing visibility before enforcement.

Step 2: Map AI to Real Workflows

Don't obsess over tool names. Map where AI touches actual work.

Lewis IT workflow mapping template:

| Workflow | AI Touchpoint | Input Type | Output Use | Owner |
| --- | --- | --- | --- | --- |
| Customer support | ChatGPT for response drafting | Support tickets with PII | Email responses to customers | Support team |
| Marketing content | Jasper AI | Blog topics, brand guidelines | Published content | Marketing |
| Sales outreach | AI email assistant | Prospect names, company info | Prospecting emails | Sales team |
| Code development | GitHub Copilot | Proprietary source code | Production code | Engineering |

Why mapping matters: Understanding context helps determine risk—AI processing public blog topics is very different from AI processing customer PII.
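The mapping template above can be kept as structured records rather than a spreadsheet, so the inventory is easy to filter and report on. A minimal sketch, with field names mirroring the table columns and illustrative entries:

```python
# Sketch: the workflow-mapping template as structured records.
# Field names mirror the table columns; entries are illustrative.
from dataclasses import dataclass

@dataclass
class AITouchpoint:
    workflow: str
    tool: str
    input_type: str
    output_use: str
    owner: str

inventory = [
    AITouchpoint("Customer support", "ChatGPT",
                 "Support tickets with PII",
                 "Email responses to customers", "Support team"),
    AITouchpoint("Marketing content", "Jasper AI",
                 "Blog topics, brand guidelines",
                 "Published content", "Marketing"),
]

def touchpoints_with(term, entries):
    """Filter touchpoints whose input type mentions a term (e.g. 'PII')."""
    return [e for e in entries if term.lower() in e.input_type.lower()]
```

A query like `touchpoints_with("PII", inventory)` immediately surfaces the workflows that need attention first.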

Step 3: Classify Data Being Fed to AI

Lewis IT data classification (simple buckets your team can apply):

Public:

  • Information already published or intended for public consumption
  • Minimal risk from AI exposure
  • Example: Published blog posts, public product information

Internal:

  • Business information not intended for external sharing
  • Moderate risk—embarrassing if exposed but not catastrophic
  • Example: Internal memos, project plans, employee directory

Confidential:

  • Sensitive business information requiring protection
  • High risk—competitive damage, business impact if exposed
  • Example: Financial data, strategic plans, customer lists, pricing

Regulated:

  • Data subject to compliance requirements
  • Critical risk—regulatory fines, legal liability if exposed
  • Example: PHI (HIPAA), PII (GDPR/CCPA), payment data (PCI DSS)

Lewis IT guidance: If employees can't easily classify data, provide simple decision trees or examples for common scenarios.
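One way to give employees that decision tree is a tiny helper that returns the most restrictive bucket whose indicators appear. A sketch only: the keyword lists are illustrative assumptions, not a complete classifier, and should be tuned to your environment.

```python
# Sketch: bucket text into the four classification levels above.
# Keyword lists are illustrative; tune them to your data.
REGULATED = {"ssn", "patient", "diagnosis", "card number"}
CONFIDENTIAL = {"pricing", "financial", "customer list", "strategy"}
INTERNAL = {"memo", "project plan", "directory"}

def classify(text):
    """Return the most restrictive bucket whose keywords appear."""
    t = text.lower()
    for bucket, keywords in (("Regulated", REGULATED),
                             ("Confidential", CONFIDENTIAL),
                             ("Internal", INTERNAL)):
        if any(k in t for k in keywords):
            return bucket
    return "Public"
```

Checking from most to least restrictive matters: text mentioning both a memo and a patient should land in Regulated, not Internal.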

Step 4: Triage Risk Quickly

You're not creating a perfect inventory—you're identifying highest risks right now.

Lewis IT risk scoring model:

High Risk (Address Immediately):

  • Regulated data in AI tools
  • Personal accounts (not SSO/managed) accessing business data
  • No retention/training controls available
  • No audit logging
  • Data can be shared or exported freely

Medium Risk (Address Within 30 Days):

  • Confidential data in AI tools
  • Managed accounts but unclear data handling
  • Limited logging available
  • Some control over retention

Low Risk (Monitor and Review):

  • Internal or public data only
  • Managed accounts with SSO
  • Clear data handling policies
  • Audit logs available
  • No sensitive data exposure

If you keep this step lightweight, you avoid analyzing everything and fixing nothing.
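The triage buckets above reduce to a few short rules. A minimal sketch, taking the audit findings as simple inputs:

```python
# Sketch: triage a touchpoint into High/Medium/Low using the
# criteria in this section. Inputs come from the audit findings.
def triage(data_class, managed_account, has_audit_log):
    """data_class: 'Regulated' | 'Confidential' | 'Internal' | 'Public'."""
    # Regulated data or unmanaged (personal) accounts: fix now.
    if data_class == "Regulated" or not managed_account:
        return "High"
    # Confidential data or no audit trail: fix within 30 days.
    if data_class == "Confidential" or not has_audit_log:
        return "Medium"
    return "Low"
```

Encoding the rules this way keeps the step lightweight and makes the triage repeatable across quarterly re-audits.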

Step 5: Decide on Outcomes and Enforce

Make decisions that are easy to follow and easy to enforce:

Approved ✅

  • Permitted for defined use cases
  • Must use managed identity (SSO)
  • Logging enabled wherever possible
  • Example: Microsoft Copilot for Microsoft 365 users, data stays within tenant

Restricted ⚠️

  • Allowed only for low-risk inputs
  • No sensitive data permitted
  • Assume the provider may train on anything users submit
  • Example: ChatGPT for public content drafting only

Replaced 🔄

  • Transition workflow to approved alternative
  • Provide supported tool with proper controls
  • Example: Replace public ChatGPT with an enterprise tier that offers data controls

Blocked 🚫

  • Poses unacceptable risk
  • No workable controls available
  • Example: AI tools requiring upload of regulated data without a BAA or DPA

Lewis IT enforcement mechanisms:

  • Technical controls (block unapproved AI domains at firewall/proxy)
  • Conditional access policies (require managed devices for approved AI)
  • DLP policies (prevent sensitive data paste into unapproved tools)
  • User training (explain why controls exist, provide approved alternatives)

Implementing Ongoing Shadow AI Governance

Lewis IT helps clients transform one-time audits into continuous governance programs.

Technical Controls

Data Loss Prevention (DLP):

  • Detect sensitive data being pasted into browser-based AI tools
  • Block or warn when regulated data enters unapproved AI
  • Monitor file uploads to AI services
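The detection half of a DLP policy boils down to pattern matching on outbound text. A minimal sketch of the kind of check a browser-DLP hook or paste interceptor might run; the patterns (US SSN, card-like numbers, email addresses) are illustrative, not a production rule set.

```python
# Sketch: pattern checks a DLP hook might run before text leaves
# the clipboard for an unapproved AI tool. Patterns are illustrative.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN format
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-like number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}

def dlp_findings(text):
    """Return the names of patterns that match, e.g. to block or warn."""
    return sorted(name for name, pat in SENSITIVE_PATTERNS.items()
                  if pat.search(text))
```

In practice the response matters as much as the match: a warn-and-educate prompt pointing to the approved alternative usually changes behavior better than a silent block.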

Network Controls:

  • Web filtering blocking unapproved AI services
  • DNS filtering preventing access to risky AI domains
  • Firewall rules limiting AI service connections

Endpoint Controls:

  • Block installation of unauthorized browser extensions
  • Prevent unapproved application installation
  • Monitor clipboard activity for sensitive data

Identity and Access Management:

  • Require SSO for approved AI tools (visibility and control)
  • Conditional access requiring managed devices
  • Monitor for personal account usage of business AI tools

Policy and Training

Clear AI Usage Policy:

  • What AI tools are approved for which use cases
  • What data can and cannot be shared with AI
  • How to request approval for new AI tools
  • Consequences for policy violations

Regular Security Awareness Training:

  • Why shadow AI creates risks
  • How to classify data before sharing with AI
  • Approved AI tools and how to access them
  • Real-world examples of shadow AI incidents

Easy Approval Process:

  • Simple form to request new AI tool evaluation
  • Fast turnaround (days, not weeks)
  • Clear criteria for approval/denial
  • IT works with requesters to find approved alternatives

Continuous Monitoring

Quarterly Shadow AI Audits:

  • Re-run discovery process
  • Identify new AI adoption
  • Review and update approved tool list
  • Measure policy compliance

Regular Risk Reviews:

  • Assess approved AI tools for changes in data handling
  • Review vendor security practices
  • Monitor for new AI features in existing tools
  • Update risk classifications as needed

The Business Case for Shadow AI Security

Lewis IT helps business leaders understand why shadow AI governance matters beyond compliance.

Data Breach Prevention

Risk: Employee pastes customer database into ChatGPT for analysis. Data breach notification required for thousands of customers.

Cost avoidance: $150-200 per affected individual for notification, credit monitoring, legal fees, regulatory fines.

Intellectual Property Protection

Risk: Engineer uses AI coding assistant on proprietary algorithms. Code becomes part of AI training data, potentially exposed to competitors.

Cost avoidance: Loss of competitive advantage, IP theft, years of R&D compromised.

Regulatory Compliance

Risk: Healthcare employee uses AI tool to draft patient communication, exposing PHI without Business Associate Agreement.

Cost avoidance: HIPAA violations ($100-$50,000 per violation), OCR investigation, corrective action plans.

Competitive Intelligence Protection

Risk: Sales team uses AI to analyze competitor pricing, inadvertently sharing your pricing strategy in prompts.

Cost avoidance: Strategic information leakage, competitive disadvantage.

Common Shadow AI Mistakes to Avoid

Mistake 1: Blocking All AI

  • Problem: Drives usage further underground, reduces productivity
  • Solution: Provide approved alternatives with proper controls

Mistake 2: Policy Without Enforcement

  • Problem: Rules exist but aren't technically enforced
  • Solution: Implement DLP, web filtering, and monitoring

Mistake 3: One-Time Audit

  • Problem: The AI landscape changes rapidly; a single audit quickly becomes stale
  • Solution: Continuous monitoring with quarterly re-audits

Mistake 4: Assuming Employees Understand Risk

  • Problem: People don't realize AI providers may train on their data
  • Solution: Regular training with concrete examples

Mistake 5: No Approved Alternatives

  • Problem: Block shadow AI without providing supported options
  • Solution: Deploy enterprise AI tools with data controls

Take Control of Shadow AI in Your Organization

Shadow AI security isn't about shutting down innovation. It's about ensuring sensitive data doesn't flow into tools you can't monitor, govern, or defend.

Lewis IT helps Maryland businesses gain visibility into AI usage, assess risk, implement controls, and provide approved alternatives—enabling productivity without compromising security.

Don't let shadow AI become a blind spot in your security program.

Audit Your Shadow AI Risks: Contact Lewis IT

Ready to discover what AI tools your team is using and secure them properly? Lewis IT offers complimentary shadow AI assessments identifying unsanctioned AI usage and providing a roadmap for governance.

We'll review your environment, discover AI adoption patterns, classify risks, and recommend practical controls that support productivity while protecting sensitive data.

Email: info@lewisit.io
Phone: 240-784-1221
Website: www.lewisit.io/contact-us

Shadow AI is happening in your organization right now. Contact Lewis IT today and transform it from security gap to governed capability.


Lewis IT provides comprehensive AI governance and security services for businesses throughout Maryland and the Mid-Atlantic region. From shadow AI discovery and risk assessment to policy development, technical controls, and ongoing monitoring, we help organizations embrace AI innovation safely and securely.
