The AI risks your insurance broker will ask about.
Shadow AI, data leakage into public language models, and the AI usage policy your team will actually read. The risks every North West SME should have a position on before the next renewal conversation.
The real AI risk in most SMEs isn’t the technology. It’s the people.
Right now, somewhere in your business, someone is pasting a CV into ChatGPT to ask for an interview question list. Someone else is dropping a set of financial figures into a free AI tool to ask for a summary. A third person is feeding a confidential client email into Claude to draft a polite reply. None of them think they’re doing anything wrong. All three are creating risks your business hasn’t accounted for.
Free, public AI tools are not built for handling business data safely. Some store prompts indefinitely. Some use them to train future versions of the model. Some have had public security incidents where one user’s data appeared in another user’s session. The detail varies by tool and by plan, and the rules change every few months.
Your job as a director isn’t to ban AI tools. It’s to make sure your team know what’s safe and what isn’t, and to have a documented position you can show your insurance broker, your auditor, and any client who asks. That’s what this page covers.
Five AI security risks every SME should have a position on.
Not every SME will face all five. Most will face at least three. We work through each with clients during a security review and produce a single-page summary.
Shadow AI
Tools your team uses that you don’t know about.
The most common AI risk in SMEs. A staff member signs up to a free AI service with their work email, starts using it for daily tasks, and never tells anyone. The business has no record of what data has been shared, no control over the account, and no way to revoke access when the person leaves. Shadow AI typically affects 60 to 80 per cent of staff in SMEs without a policy.
Data retention by free AI tools
Your prompts may be stored, reviewed and reused.
Free tiers of public AI tools typically retain user prompts for at least 30 days, often longer, and may use them to train future versions of the model. That means a financial spreadsheet pasted into ChatGPT free could, in principle, surface in another user’s response months later. Paid business tiers handle this differently, but most staff are using the free version.
AI-generated phishing
Better written, better targeted, harder to spot.
The same AI tools your team find useful are being used by attackers to write better phishing emails. Gone are the days of obvious typos and broken English. AI-generated phishing emails reference real internal projects, mimic individual writing styles, and arrive at convincing times. Staff training and email security tooling both need to keep up.
Copilot over-permissioning
If staff can see it, Copilot can quote it.
If your SharePoint permissions are set so that “everyone in the organisation” can view sensitive folders, Copilot will use that data when answering any user’s question. A junior staff member could ask Copilot a casual question and receive a response based on board minutes or salary data they were never meant to see. Tightening permissions is a Copilot prerequisite, not a nice-to-have.
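Under the hood, checking for this is a matter of enumerating permission grants and flagging anything scoped to the whole organisation. A minimal sketch of that logic in Python, assuming a simplified version of the permissions payload Microsoft Graph returns for a folder (the field names here mirror Graph's permissions resource loosely and are illustrative, not a drop-in audit script):

```python
# Illustrative sketch: flag permission entries that expose a folder to the
# whole organisation. The payload shape loosely mirrors the Microsoft Graph
# driveItem "permissions" resource, but treat the field names as assumptions
# and check the real API documentation before relying on this.

BROAD_GRANTS = {"everyone", "everyone except external users"}

def overbroad_permissions(permissions):
    """Return (roles, grantee) pairs for grants that expose content org-wide."""
    flagged = []
    for perm in permissions:
        for grantee in perm.get("grantedToIdentities", []):
            name = grantee.get("displayName", "").lower()
            if name in BROAD_GRANTS:
                flagged.append((perm.get("roles"), grantee["displayName"]))
    return flagged

sample = [
    {"roles": ["read"],
     "grantedToIdentities": [{"displayName": "Everyone except external users"}]},
    {"roles": ["write"],
     "grantedToIdentities": [{"displayName": "Finance Team"}]},
]

print(overbroad_permissions(sample))
# -> [(['read'], 'Everyone except external users')]
```

In practice you would run a check like this across every site and library before switching Copilot on, because Copilot answers from whatever the asking user can technically see, not what they were meant to see.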
Loss of audit trail
When AI writes the email, who is accountable?
If a member of staff sends a client a Copilot-drafted email containing a wrong figure, your audit trail shows the staff member sent it. From a regulatory and contractual perspective, the accountability is theirs. Most SMEs have no internal guidance on when AI-drafted content needs to be reviewed before sending, and that gap is becoming an audit issue.
The five sections of an AI usage policy that actually works.
You don’t need a 40-page AI governance framework. You need a two-page document that staff will actually read. Here’s what it should cover.
Approved tools list.
A short, clear list of which AI tools are permitted for business use. For most SMEs this is Microsoft 365 Copilot for licensed users, and a paid business tier of one chat-based AI for general use. Anything else requires sign-off.
What you can paste, what you can’t.
Plain English examples. Drafting a generic blog post: fine. Pasting client data, financial figures, staff details, contracts or anything marked confidential: not fine. The rule of thumb is “if you wouldn’t email it to a stranger, don’t paste it into AI”.
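To make the rule concrete, here is what that rule of thumb looks like as a crude screening check, sketched in Python. The marker patterns are illustrative assumptions, not a real data-classification engine; proper DLP tooling does this far better, but the point is that the policy's examples translate into simple, checkable criteria:

```python
# Illustrative only: a crude pre-paste check that flags text likely to be
# confidential under the policy's rule of thumb. The patterns below are
# assumptions for demonstration; real classification needs proper DLP tooling.
import re

CONFIDENTIAL_MARKERS = [
    r"\bconfidential\b",          # explicit "confidential" markings
    r"\b\d{2}-\d{2}-\d{2}\b",     # UK bank sort codes
    r"\b\d{8}\b",                 # eight-digit runs (rough account-number proxy)
    r"[\w.+-]+@[\w-]+\.[\w.]+",   # email addresses (staff or client details)
]

def looks_confidential(text: str) -> bool:
    """Return True if the text trips any marker pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in CONFIDENTIAL_MARKERS)

print(looks_confidential("Draft a blog post about spring gardening"))        # False
print(looks_confidential("CONFIDENTIAL: Q3 payroll for j.smith@acme.co.uk"))  # True
```

A generic drafting request passes; anything carrying markings, payment details or personal identifiers is flagged, which is exactly the "would you email it to a stranger" test in executable form.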
Verification rules for AI output.
When AI-drafted content needs human review before sending. For external client communications, always. For internal first drafts, optional. For anything involving numbers, contracts or commitments, always. Make this a one-paragraph rule, not a five-page process.
Who to ask if unsure.
One named person staff can email or message when they’re not sure if a use case is OK. Usually the owner, MD or operations lead. Make this person visible. The whole policy fails if staff have to guess.
Review cadence.
The technology shifts every quarter. New tools launch, pricing changes, and risks evolve. The policy should commit to a six-monthly review with whoever drafted it, so it doesn’t become a museum piece sitting in a folder no one opens.
An AI security review, then a clear plan.
We don’t sell AI security as a separate product. It’s part of how we look after your IT. A typical AI security engagement looks like this.
Half a day talking to you and three or four members of your team about what AI tools they’re already using, what they paste, what concerns them. We then map that against your existing M365 setup, security tooling and Cyber Essentials Plus framework. The output is a single-page summary of where you are and what to fix first.
Most clients are surprised by what we find. Not because there’s a disaster, but because there are a lot of small, fixable issues no one had noticed.
Two related questions worth answering at the same time.
AI security doesn’t sit in isolation. Read these alongside this guide.
Copilot is the AI tool with the cleanest data handling for SMEs, but only if your SharePoint permissions are sensible. The honest guide to what it costs and where it pays off.
Read the Copilot guide →
Let’s get you a position on AI security you can actually defend.
An AI security review for a typical 30-person SME takes about a week of elapsed time and produces a single-page summary, a two-page usage policy, and a list of fixes prioritised by impact. No alarming PDFs, no theatre, no scaremongering.