Small businesses are adopting AI faster than they're securing it. That's not an accusation — it's arithmetic. AI tools deliver immediate productivity gains; security policies take time to write and enforce. The gap between adoption and governance is where breaches happen.

This checklist covers the 10 most critical AI security gaps we see in SMBs. Each item is actionable without a dedicated security team. Priority labels (High / Medium / Low) reflect what to address first if you're working through this incrementally.

How to use this: Work through each item with your leadership team. Mark what you have in place, what you're missing, and assign an owner for each gap. Don't try to close everything in one sprint — prioritize the High items first, then come back for Medium.

The 10-Point AI Security Checklist

01

AI Tool Inventory: Know What You're Actually Using

You cannot secure what you don't know exists. Before any other step, build a complete list of every AI tool in use — company-approved and employee-adopted. This includes browser extensions, coding assistants, writing tools, and any SaaS product with an AI feature. Survey your team anonymously; people disclose more when there's no punishment attached.

HIGH PRIORITY
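If your anonymous survey lands in a spreadsheet, a few lines of scripting can turn raw responses into a ranked inventory. A minimal sketch in Python, assuming a CSV export with a "tool" column (both the filename and the column name are illustrative assumptions):

```python
import csv
from collections import Counter

def build_inventory(survey_csv: str) -> list[tuple[str, int]]:
    """Aggregate anonymous survey responses into a deduplicated
    tool inventory with usage counts, most-used first."""
    counts: Counter[str] = Counter()
    with open(survey_csv, newline="") as f:
        for row in csv.DictReader(f):
            # "tool" is an assumed column name in the survey export
            tool = row["tool"].strip().lower()
            if tool:
                counts[tool] += 1
    return counts.most_common()
```

Normalizing names before counting matters: "ChatGPT", "chatgpt", and "ChatGPT " should all count as one tool, or your inventory will undercount your most popular ones.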
02

Data Classification: Define What Can't Go Into AI Tools

Not all data carries the same risk. Classify your data into at least three tiers: public (safe for AI), internal (use with approved tools only), and restricted (never goes into any AI tool: customer PII, financial records, legal documents, health data). Make the classification visible and simple. "Never paste customer data into AI" is a rule employees can follow; a 40-page data governance policy isn't.

HIGH PRIORITY
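The restricted tier is the one worth automating a check for. A minimal pattern-based screen, sketched in Python; the patterns are illustrative assumptions, not a complete PII detector, and a real deployment would tune them to your own restricted-data definitions:

```python
import re

# Illustrative patterns only -- tune to your own restricted-data rules.
RESTRICTED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> str:
    """Return 'restricted (...)' naming the matched patterns, else
    'unclassified'. Unclassified text still needs a human call
    between the public and internal tiers."""
    hits = [name for name, pat in RESTRICTED_PATTERNS.items()
            if pat.search(text)]
    return f"restricted ({', '.join(hits)})" if hits else "unclassified"
```

A screen like this only catches obvious cases. It supplements the simple human rule; it doesn't replace it.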
03

Approved AI Tools List: Make the Safe Path the Easy Path

Employees will use AI tools regardless of whether you have a policy. Your job is to make the approved, governed tools the default choice. Publish a short list of approved AI tools — preferably enterprise versions with data processing agreements, opt-out from training, and SOC 2 compliance. Make it easy to request additions. If the approval process takes two weeks, employees won't bother and will use whatever's convenient.

HIGH PRIORITY
04

Vendor Data Processing Agreements (DPAs)

Every AI vendor that processes your data — even indirectly through your employees' usage — should have a signed Data Processing Agreement. A DPA defines what data can be processed, how it's stored, whether it's used for training, and what happens on deletion. If you're subject to GDPR or handle EU customer data, DPAs aren't optional. Check your existing SaaS vendors too — many have added AI features that now fall under your DPA obligations.

HIGH PRIORITY
05

AI Acceptable Use Policy (AUP)

A written policy that tells employees what they can and can't do with AI tools at work. Keep it short — two pages maximum. Cover: approved tools, prohibited data types, consequences of violations, and how to report concerns. A policy buried in an employee handbook doesn't count. It needs to be communicated directly, signed by employees, and referenced in onboarding. Update it quarterly as the tool landscape changes.

HIGH PRIORITY
06

Prompt Injection Awareness for Customer-Facing AI

If you've deployed any AI tool that takes user input — a customer support chatbot, an AI search function, an intake form — it's a prompt injection target. Prompt injection attacks manipulate AI systems by embedding malicious instructions in user input. Review any customer-facing AI for input sanitization, output filtering, and privilege separation. This is especially critical if the AI has access to internal systems or can take actions on users' behalf.

MEDIUM PRIORITY
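One cheap layer is screening user input for obvious injection phrasing and fencing it off from your trusted instructions. A minimal sketch in Python; the marker list and the `<user_input>` delimiter convention are illustrative assumptions, and pattern matching alone won't stop a determined attacker, which is why privilege separation (the AI simply cannot reach sensitive systems) matters more:

```python
import re

# Illustrative markers only -- real injection attempts are far more varied.
INJECTION_MARKERS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_input(user_text: str) -> tuple[bool, str]:
    """Flag obvious injection phrasing and wrap the input in
    delimiters so the model can tell user text from trusted
    instructions. Returns (flagged, fenced_text)."""
    lowered = user_text.lower()
    flagged = any(re.search(p, lowered) for p in INJECTION_MARKERS)
    fenced = f"<user_input>\n{user_text}\n</user_input>"
    return flagged, fenced
```

Flagged inputs can be logged and routed to a human instead of the model; either way, the fenced version is what reaches the AI, never the raw text concatenated into your instructions.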
07

AI Tool Access Controls and Offboarding

When an employee leaves, are their AI tool accounts disabled? Most companies have solid offboarding for core systems (email, Slack, GitHub) but miss the sprawl of AI tools that employees signed up for with their work email. If an employee had access to an AI tool trained on your company's documents, that access should be revoked the same day they leave. Audit your AI tool accounts quarterly against your current employee list.

MEDIUM PRIORITY
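The quarterly audit itself is just a set difference: accounts per tool, minus current employees. A minimal sketch in Python, assuming you can export account emails from each vendor's admin console (the data shapes here are illustrative):

```python
def stale_accounts(tool_accounts: dict[str, set[str]],
                   current_employees: set[str]) -> dict[str, set[str]]:
    """Return, per tool, the account emails with no matching current
    employee. tool_accounts maps tool name -> set of account emails,
    e.g. pulled from each vendor's admin console export."""
    return {
        tool: emails - current_employees
        for tool, emails in tool_accounts.items()
        if emails - current_employees
    }
```

Run it each quarter against your HR roster; everything it returns is an account to revoke.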
08

AI-Specific Security Training

Annual security training should include AI-specific scenarios, not just phishing simulations. Cover: what shadow AI is and why it matters, how to identify the classification tier of a piece of data, what to do when employees are unsure whether something is safe to put into an AI tool, and how to report a potential AI-related data incident. Tailor examples to your industry — a healthcare employee needs different examples than a software developer.

MEDIUM PRIORITY
09

Incident Response Plan for AI Data Exposure

What happens when — not if — an employee accidentally pastes restricted data into an unapproved AI tool? Do you have a documented process for assessing severity, notifying affected parties, and satisfying regulatory reporting obligations? Your existing incident response plan may not cover AI-specific scenarios. Add a specific runbook for AI data exposure incidents, including contact information for the AI vendor's DPA team and your legal counsel.

MEDIUM PRIORITY
10

Quarterly AI Security Review

The AI tool landscape changes faster than any annual review cycle can track. Set a quarterly calendar event for a 60-minute AI security review: update your approved tools list, check for new AI features in existing SaaS tools, review any incidents or near-misses, and assess whether your data classification rules still fit your current business. This keeps security governance from falling behind your AI adoption curve.

ONGOING

How to Prioritize If You're Starting From Zero

If none of these are in place today, don't try to do everything at once. Here's a practical 30-day sprint, following the priority labels above:

Week 1: Build your AI tool inventory (item 1) and publish your data classification tiers (item 2).

Week 2: Publish the approved tools list (item 3) and start collecting vendor DPAs (item 4).

Week 3: Write, communicate, and collect signatures on the acceptable use policy (item 5).

Week 4: Stand up AI tool offboarding and access audits (item 7) and draft the AI data exposure runbook (item 9).

Items 6, 8, and 10 are ongoing. Schedule them once the foundational pieces are in place.

Common mistake: Treating this as a one-time project. AI security governance is a continuous function, not a box to check. The quarterly review (item 10) is the mechanism that keeps everything else from becoming stale.

What This Checklist Doesn't Cover

This checklist addresses governance and policy — the human side of AI security. It doesn't cover the technical layer: network monitoring for AI traffic, endpoint controls, or security architecture for AI systems you build yourself. Those require a deeper technical assessment.

If you've completed this checklist and want to go deeper, read our guide on shadow AI risks for the threat model behind items 1-3, or take the free AICyberNav assessment for a personalized analysis of your specific stack and team.

Get Your Personalized AI Security Assessment

This checklist tells you what to check. AICyberNav's free assessment tells you what's actually broken in your specific environment — your tools, your team, your data types, your compliance obligations.

Take the Free Assessment →
10 minutes. No signup. Actionable results, not a sales pitch.