Small businesses are adopting AI faster than they're securing it. That's not an accusation — it's arithmetic. AI tools deliver immediate productivity gains; security policies take time to write and enforce. The gap between adoption and governance is where breaches happen.
This checklist covers the 10 most critical AI security gaps we see in SMBs. Each item is actionable without a dedicated security team. Priority labels (High / Medium / Low) reflect what to address first if you're working through this incrementally.
How to use this: Work through each item with your leadership team. Mark what you have in place, what you're missing, and assign an owner for each gap. Don't try to close everything in one sprint — prioritize the High items first, then come back for Medium.
The 10-Point AI Security Checklist
AI Tool Inventory: Know What You're Actually Using
You cannot secure what you don't know exists. Before any other step, build a complete list of every AI tool in use — company-approved and employee-adopted. This includes browser extensions, coding assistants, writing tools, and any SaaS product with an AI feature. Survey your team anonymously; people disclose more when there's no punishment attached.
HIGH PRIORITYData Classification: Define What Can't Go Into AI Tools
Not all data carries the same risk. Classify your data into at minimum three tiers: public (safe for AI), internal (use with approved tools only), and restricted (never goes into any AI tool — customer PII, financial records, legal documents, health data). Make the classification visible and simple. "Never paste customer data into AI" is a rule employees can follow; a 40-page data governance policy isn't.
HIGH PRIORITYApproved AI Tools List: Make the Safe Path the Easy Path
Employees will use AI tools regardless of whether you have a policy. Your job is to make the approved, governed tools the default choice. Publish a short list of approved AI tools — preferably enterprise versions with data processing agreements, opt-out from training, and SOC 2 compliance. Make it easy to request additions. If the approval process takes two weeks, employees won't bother and will use whatever's convenient.
HIGH PRIORITYVendor Data Processing Agreements (DPAs)
Every AI vendor that processes your data — even indirectly through your employees' usage — should have a signed Data Processing Agreement. A DPA defines what data can be processed, how it's stored, whether it's used for training, and what happens on deletion. If you're subject to GDPR or handle EU customer data, DPAs aren't optional. Check your existing SaaS vendors too — many have added AI features that now fall under your DPA obligations.
HIGH PRIORITYAI Acceptable Use Policy (AUP)
A written policy that tells employees what they can and can't do with AI tools at work. Keep it short — two pages maximum. Cover: approved tools, prohibited data types, consequences of violations, and how to report concerns. A policy buried in an employee handbook doesn't count. It needs to be communicated directly, signed by employees, and referenced in onboarding. Update it quarterly as the tool landscape changes.
HIGH PRIORITYPrompt Injection Awareness for Customer-Facing AI
If you've deployed any AI tool that takes user input — a customer support chatbot, an AI search function, an intake form — it's a prompt injection target. Prompt injection attacks manipulate AI systems by embedding malicious instructions in user input. Review any customer-facing AI for input sanitization, output filtering, and privilege separation. This is especially critical if the AI has access to internal systems or can take actions on users' behalf.
MEDIUM PRIORITYAI Tool Access Controls and Offboarding
When an employee leaves, are their AI tool accounts disabled? Most companies have solid offboarding for core systems (email, Slack, GitHub) but miss the sprawl of AI tools that employees signed up for with their work email. If an employee had access to an AI tool trained on your company's documents, that access should be revoked the same day they leave. Audit your AI tool accounts quarterly against your current employee list.
MEDIUM PRIORITYAI-Specific Security Training
Annual security training should include AI-specific scenarios, not just phishing simulations. Cover: what shadow AI is and why it matters, how to identify data classification tier, what to do when they're unsure if something is safe to put into an AI tool, and how to report a potential AI-related data incident. Tailor examples to your industry — a healthcare employee needs different examples than a software developer.
MEDIUM PRIORITYIncident Response Plan for AI Data Exposure
What happens when — not if — an employee accidentally pastes restricted data into an unapproved AI tool? Do you have a documented process for assessing severity, notifying affected parties, and satisfying regulatory reporting obligations? Your existing incident response plan may not cover AI-specific scenarios. Add a specific runbook for AI data exposure incidents, including contact information for the AI vendor's DPA team and your legal counsel.
MEDIUM PRIORITYQuarterly AI Security Review
The AI tool landscape changes faster than any annual review cycle can track. Set a quarterly calendar event for a 60-minute AI security review: update your approved tools list, check for new AI features in existing SaaS tools, review any incidents or near-misses, and assess whether your data classification rules still fit your current business. This keeps security governance from falling behind your AI adoption curve.
ONGOINGHow to Prioritize If You're Starting From Zero
If none of these are in place today, don't try to do everything at once. Here's a practical 30-day sprint:
- Week 1: Items 1 and 2 — inventory and data classification. These are the foundation everything else builds on.
- Week 2: Items 3 and 5 — approved tools list and AUP. Communicate both directly to all employees.
- Week 3: Item 4 — DPAs. Work with your legal counsel or use a standard template for common vendors.
- Week 4: Items 7 and 9 — offboarding audit and incident response addition. These take a few hours each.
Items 6, 8, and 10 are ongoing. Schedule them once the foundational pieces are in place.
Common mistake: Treating this as a one-time project. AI security governance is a continuous function, not a box to check. The quarterly review (item 10) is the mechanism that keeps everything else from becoming stale.
What This Checklist Doesn't Cover
This checklist addresses governance and policy — the human side of AI security. It doesn't cover the technical layer: network monitoring for AI traffic, endpoint controls, or security architecture for AI systems you build yourself. Those require a deeper technical assessment.
If you've completed this checklist and want to go deeper, read our guide on shadow AI risks for the threat model behind items 1-3, or take the free AICyberNav assessment for a personalized analysis of your specific stack and team.
Get Your Personalized AI Security Assessment
This checklist tells you what to check. AICyberNav's free assessment tells you what's actually broken in your specific environment — your tools, your team, your data types, your compliance obligations.
Take the Free Assessment →