How to Create an AI Policy for Your Company (Template Included)
Your employees are already using AI. Whether you've officially adopted it or not, someone on your team has pasted client data into ChatGPT, used AI to draft an email, or asked Gemini to help with a report.
That's not necessarily bad. But without an AI policy, you have no idea what data is being shared, what outputs are being published without review, or what risks you're quietly accumulating.
An AI policy isn't about control — it's about clarity. It tells your team: here's what's encouraged, here's what's off-limits, and here's how we do this responsibly.
Here's how to create one that works.
Why You Need an AI Policy Now (Not Later)
Three reasons most businesses can't afford to wait:
1. Data Leakage Is Already Happening
When an employee pastes a client contract into a free AI tool, that data may be used to train the model. Customer names, financial details, proprietary information — all potentially exposed. A policy sets clear boundaries on what data can and can't be used with AI tools.
2. Quality Control Gaps
AI-generated content can be confidently wrong. If someone uses AI to draft a client proposal and sends it without review, you've got accuracy issues wearing your company's name. A policy establishes review requirements.
3. Regulatory Reality
The EU AI Act is now in force, with obligations phasing in over the next few years. GDPR applies to any personal data processed by AI. Industry regulations (financial services, healthcare, legal) add further layers. A documented AI policy demonstrates due diligence and reduces regulatory risk.
What Your AI Policy Should Cover
A good AI policy for a small-to-medium business doesn't need to be 50 pages. It needs to cover seven key areas clearly and concisely.
1. Approved Tools and Platforms
List which AI tools your company approves for business use. Be specific:
Example Section: Approved Tools
Approved for business use: ChatGPT Team/Enterprise, Microsoft Copilot (through our M365 subscription), Grammarly Business.
Approved for limited/personal use: Free tiers of ChatGPT, Claude, Gemini — but only with non-sensitive data.
Not approved: Any AI tool not listed above, until reviewed and approved by [designated person].
Why this matters: business-tier tools typically come with data agreements that keep your inputs out of model training. Free consumer tools often don't.
2. Data Classification Rules
This is the most important section. Define what types of data can be used with AI:
Example Section: Data Rules
Never input into AI tools: Client personal data (names, addresses, financial info), employee personal data, passwords/credentials, confidential contracts, proprietary algorithms or trade secrets.
OK with approved tools only: Internal processes, general business questions, anonymized data, publicly available information, marketing draft content.
OK with any AI tool: General knowledge questions, formatting help, brainstorming on non-sensitive topics.
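Data rules only work if people can check them quickly. As a rough illustration (not part of the policy itself), a few lines of Python can flag obvious problems before text is pasted into an AI tool. The pattern names and regexes below are hypothetical and deliberately incomplete; a real screen would need much broader coverage.

```python
import re

# Hypothetical patterns for a minimal pre-paste screen.
# Real coverage (names, account numbers, contract text) is much harder.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credential": re.compile(r"(?i)\b(password|api[_ ]?key|secret)\b\s*[:=]"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_for_ai(text: str) -> list[str]:
    """Return the names of sensitive patterns found in text.

    An empty list means the text passed this (very rough) screen.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

print(screen_for_ai("Summarise our Q3 planning process."))  # passes: []
print(screen_for_ai("Client contact: anna@example.com, password: hunter2"))
```

A script like this catches careless mistakes, not determined misuse; the policy's data rules, not the regexes, remain the source of truth.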
3. Human Review Requirements
Specify what AI outputs must be reviewed by a human before use:
- Always review: Anything going to a client, any external communication, any content published under the company name, any financial calculations, any legal or compliance-related content
- Review recommended: Internal documents, meeting summaries, draft processes
- No review needed: Personal productivity (brainstorming, outlining, grammar checking internal notes)
4. Disclosure Rules
When should you tell people AI was involved? This varies by industry and context:
- Always disclose: When clients ask directly. When AI is making or influencing decisions about people (hiring, credit, etc.).
- Consider disclosing: When AI substantially generated client-facing content. When it could affect trust or expectations.
- No disclosure needed: Internal productivity use. Grammar and style checking. Research assistance.
5. Accountability
This is simple but critical: the person who uses AI output is responsible for that output. AI doesn't take the blame when something goes wrong. If you use AI to draft a proposal and it contains an error, that's your error.
Make this explicit in the policy. It prevents the "but the AI said..." defense.
6. Training and Onboarding
Specify how employees learn about the policy:
- Include AI policy review in onboarding for new hires
- Annual review for all staff (or when the policy updates)
- Designate an AI point person for questions
7. Review Schedule
AI changes fast. Your policy should too. Set a review schedule:
- Quarterly: Quick check — any new tools to add/remove? Any incidents to address?
- Annually: Full policy review. Update for new regulations, tools, and company needs.
- As needed: After any AI-related incident or significant regulatory change.
How to Write It: Practical Steps
Step 1: Audit Current AI Use (1-2 days)
Before writing the policy, find out what's already happening. Send a quick, anonymous survey to your team:
- Which AI tools do you use for work?
- What tasks do you use them for?
- What types of data have you used with AI tools?
- How often do you use them?
The answers will probably surprise you. That's the point.
Step 2: Draft the Policy (1-2 days)
Using the seven sections above as your framework, write a first draft. Keep it under 3 pages. Use plain language — if your team can't understand it, they won't follow it.
Step 3: Get Feedback (1 week)
Share the draft with 3-4 people: at least one person from operations, one from any client-facing role, and ideally someone who's already an AI power user. Their feedback will catch blind spots.
Step 4: Finalize and Communicate (1-2 days)
Don't just email the policy and hope people read it. Present it in a team meeting. Explain the "why" behind each section. Make the AI point person available for questions.
Step 5: Make It Accessible (ongoing)
Put the policy somewhere people can actually find it. Pin it in Slack. Add it to your shared drive. Include it in your employee handbook. A policy nobody can find is a policy nobody follows.
Common Mistakes to Avoid
Being Too Restrictive
If your policy basically says "don't use AI," people will ignore it and use AI anyway — just secretly. That's worse than having no policy at all because now you have hidden risk.
Being Too Vague
"Use AI responsibly" is not a policy. People need specific guidance. What tools? What data? What review process? Vague policies create inconsistent behavior.
Forgetting to Update
A policy written in 2024 that references GPT-4 as cutting-edge is already outdated. Set that review schedule and stick to it.
Not Getting Buy-In
If the team feels the policy was imposed on them without input, compliance will be minimal. Involve them in creating it and they'll take ownership of following it.
A Quick Template to Get You Started
Here's a simplified one-page version you can adapt right now:
[Company Name] AI Usage Policy — v1.0
Effective date: [Date]
Approved tools: [List your approved tools]
Data rules: Never input personal data, client confidential data, or credentials into any AI tool. Use only anonymized or public data with non-approved tools.
Review requirement: All AI-generated content must be reviewed by a human before external use. The reviewer is accountable for accuracy.
Disclosure: Disclose AI involvement when asked by clients or when AI influences decisions about people.
Point person: [Name] handles AI-related questions and tool approvals.
Review schedule: Quarterly check, annual full review.
That's a starting point. It's better than nothing, and you can build on it.
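If your internal docs live in a repo or wiki, the one-page template above can also be kept as a small structured file that's easy to diff at each quarterly review. Field names here are illustrative, not a standard:

```yaml
# ai-policy.yaml — a sketch mirroring the one-page template above
version: "1.0"
effective_date: "[Date]"
point_person: "[Name]"          # handles questions and tool approvals
approved_tools:
  business_use: ["[List your approved tools]"]
data_rules:
  never_input:
    - personal data
    - client confidential data
    - credentials
  non_approved_tools: anonymized or public data only
review:
  external_use: human review required
  reviewer_accountable: true
disclosure:
  - when clients ask
  - when AI influences decisions about people
review_schedule:
  quarterly: quick check
  annual: full review
```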
Want the Full AI Policy Template?
Our AI Adoption Starter Kit includes a complete, customizable AI policy template with section-by-section guidance, example language for different industries, and a team training guide. Plus an ROI calculator, implementation roadmap, and more — all for €27.
Get the Starter Kit → €27
The Bottom Line
An AI policy isn't bureaucracy — it's protection. It protects your data, your reputation, your clients, and your team. And it doesn't have to be complicated.
Start with the audit. Write a simple draft. Get feedback. Communicate it clearly. Review it regularly.
Your team is already using AI. The only question is whether they're doing it with guardrails or without them. A clear policy makes sure it's with.