AI Security & Data Risks for SMEs


The AI risks your insurance broker will ask about.

Shadow AI, data leakage into public language models, and the AI usage policy your team will actually read. The risks every North West SME should have a position on before the next renewal conversation.

ISO 27001 & Cyber Essentials Plus. AI risk frameworks built on certified foundations.

Practical, not theoretical. AI usage policies your team will read in five minutes, not forty pages.

Insurance-ready. Documented controls your cyber insurance broker will actually accept.
Shadow AI risk panel: typical SME exposure is high and unmanaged. Most SMEs have staff using public AI tools without any policy in place.
The honest opening

The real AI risk in most SMEs isn’t the technology. It’s the people.

Right now, somewhere in your business, someone is pasting a CV into ChatGPT to ask for an interview question list. Someone else is dropping a set of financial figures into a free AI tool to ask for a summary. A third person is feeding a confidential client email into Claude to draft a polite reply. None of them think they’re doing anything wrong. All three are creating risks your business hasn’t accounted for.

Free, public AI tools are not built for handling business data safely. Some store prompts indefinitely. Some use them to train future versions of the model. Some have had public security incidents where one user’s data appeared in another user’s session. The detail varies by tool and by plan, and the rules change every few months.

Your job as a director isn’t to ban AI tools. It’s to make sure your team know what’s safe and what isn’t, and to have a documented position you can show your insurance broker, your auditor, and any client who asks. That’s what this page covers.

The risk register

Five AI security risks every SME should have a position on.

Not every SME will have all five. Most will have at least three. We work through each with clients during a security review and produce a single-page summary.

01

Shadow AI

Tools your team use that you don’t know about.

The most common AI risk in SMEs. A staff member signs up to a free AI service with their work email, starts using it for daily tasks, and never tells anyone. The business has no record of what data has been shared, no control over the account, and no way to revoke access when the person leaves. Shadow AI typically affects 60 to 80 per cent of staff in SMEs without a policy.

02

Data retention by free AI tools

Your prompts may be stored, reviewed and reused.

Free tiers of public AI tools typically retain user prompts for at least 30 days, often longer, and may use them to train future versions of the model. That means a financial spreadsheet pasted into the free tier of ChatGPT could, in principle, surface in another user’s response months later. Paid business tiers handle this differently, but most staff are using the free version.

03

AI-generated phishing

Better written, better targeted, harder to spot.

The same AI tools your team find useful are being used by attackers to write better phishing emails. Gone are the days of obvious typos and broken English. AI-generated phishing emails reference real internal projects, mimic individual writing styles, and arrive at convincing times. Staff training and email security tooling both need to keep up.

04

Copilot over-permissioning

If staff can see it, Copilot can quote it.

If your SharePoint permissions are set so that “everyone in the organisation” can view sensitive folders, Copilot will use that data when answering any user’s question. A junior staff member could ask Copilot a casual question and receive a response based on board minutes or salary data they were never meant to see. Tightening permissions is a Copilot prerequisite, not a nice-to-have.
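As a rough illustration of what a permissions review automates, here is a minimal Python sketch. It assumes permission records shaped like those returned by the Microsoft Graph drive-item permissions endpoint, where a sharing link carries a `scope` of `anonymous`, `organization` or `users`; the sample records and the `risky_permissions` helper are illustrative, not a production tool.

```python
# Hedged sketch: flag drive-item permissions whose sharing link is too broad.
# Assumes Microsoft Graph-style permission objects, where sharing links carry
# a "scope" of "anonymous", "organization" or "users". Direct grants (no
# "link" key) are left alone here.

RISKY_SCOPES = {"anonymous", "organization"}

def risky_permissions(permissions):
    """Return the permissions whose sharing-link scope is too broad."""
    flagged = []
    for perm in permissions:
        scope = perm.get("link", {}).get("scope")
        if scope in RISKY_SCOPES:
            flagged.append(perm)
    return flagged

# Illustrative (made-up) permission records:
sample = [
    {"id": "1", "link": {"scope": "organization", "type": "view"}},
    {"id": "2", "link": {"scope": "users", "type": "edit"}},
    {"id": "3", "roles": ["owner"]},  # direct grant, no sharing link
]
print([p["id"] for p in risky_permissions(sample)])  # ['1']
```

In a real review the records would come from Graph per folder; the point is that “everyone in the organisation” links are machine-findable, so the audit is a script plus judgement, not a manual trawl.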

05

Loss of audit trail

When AI writes the email, who is accountable?

If a member of staff sends a client a Copilot-drafted email containing a wrong figure, your audit trail shows the staff member sent it. From a regulatory and contractual perspective, the accountability is theirs. Most SMEs have no internal guidance on when AI-drafted content needs to be reviewed before sending, and that gap is becoming an audit issue.

What your policy should cover

The five sections of an AI usage policy that actually works.

You don’t need a 40-page AI governance framework. You need a two-page document that staff will actually read. Here’s what it should cover.

01

Approved tools list.

A short, clear list of which AI tools are permitted for business use. For most SMEs this is Microsoft 365 Copilot for licensed users, and a paid business tier of one chat-based AI for general use. Anything else requires sign-off.

2-3 tools approved · Clear sign-off path
02

What you can paste, what you can’t.

Plain English examples. Drafting a generic blog post: fine. Pasting client data, financial figures, staff details, contracts or anything marked confidential: not fine. The rule of thumb is “if you wouldn’t email it to a stranger, don’t paste it into AI”.

5+ worked examples · Plain English only
03

Verification rules for AI output.

When AI-drafted content needs human review before sending. For external client communications, always. For internal first drafts, optional. For anything involving numbers, contracts or commitments, always. Make this a one-paragraph rule, not a five-page process.

Always: client comms · Always: contracts
04

Who to ask if unsure.

One named person staff can email or message when they’re not sure if a use case is OK. Usually the owner, MD or operations lead. Make this person visible. The whole policy fails if staff have to guess.

1 named contact · Same-day response target
05

Review cadence.

The technology shifts every quarter. New tools launch, pricing changes, and risks evolve. The policy should commit to a six-monthly review with whoever drafted it, so it doesn’t become a museum piece sitting in a folder no one opens.

6-month review cycle · Owned by name
How we help

An AI security review, then a clear plan.

We don’t sell AI security as a separate product. It’s part of how we look after your IT. A typical AI security engagement looks like this.

Half a day talking to you and three or four members of your team about what AI tools they’re already using, what they paste, what concerns them. We then map that against your existing M365 setup, security tooling and Cyber Essentials Plus framework. The output is a single-page summary of where you are and what to fix first.

Most clients are surprised by what we find. Not because there’s a disaster, but because there are a lot of small, fixable issues no one had noticed.

Shadow AI audit: identify which AI tools your team are quietly using and what data has been shared.
SharePoint permissions review: find and fix the over-permissive folders that would leak through Copilot.
AI usage policy drafting: a two-page document tailored to your business, written in plain English.
Staff briefing session: one hour, in person or on Teams, walking your team through what’s safe and what isn’t.
AI-aware phishing training: update your security awareness training to cover AI-generated phishing patterns.
Insurance documentation: documented controls in the format your cyber insurance broker actually wants.
Related guides

Two related questions worth answering at the same time.

AI security doesn’t sit in isolation. Read these alongside this guide.

01
Microsoft 365 Copilot for SMEs

Copilot is the AI tool with the cleanest data handling for SMEs, but only if your SharePoint permissions are sensible. The honest guide to what it costs and where it pays off.

Copilot M365 Pricing

Read the Copilot guide →

02
ChatGPT vs Copilot vs Claude

If you’re going to approve a chat-based AI tool for business use, which one and on which plan? An honest comparison from an MSP perspective with security implications laid out.

ChatGPT Copilot Claude

Read the comparison →

03
AI Services Overview

The full Blowfish position on AI for North West SMEs. The five pillars of AI readiness and how we help, including the wider context this security guide sits inside.

Pillar Readiness Overview

Back to AI Services →

Get in touch

Let’s get you a position on AI security you can actually defend.

An AI security review for a typical 30-person SME takes about a week of elapsed time and produces a single-page summary, a two-page usage policy, and a list of fixes prioritised by impact. No alarming PDFs, no theatre, no scaremongering.

Talk to us
Find us
Unit 2, Hattersley Court, Ormskirk, L39 2AY