AI at Work: Do You Know What’s Happening Behind the Scenes?

AI is being used in your business. You might not know about it, but it is. It's time to get it under control.

 

If you own or run a business, there’s a strong chance your team is already using AI tools without telling you, and without realising they might be putting your company’s data at risk.

Multiple large studies show this is not a fringe issue. They identify that:

  • Over half of workers who use AI are doing so without approval
  • Many have never been trained on the safe use of AI
  • A significant share admits to pasting confidential information into public AI tools

That combination—unapproved tools, no training, and sensitive data—creates a perfect storm for accidental data leaks and compliance problems.

Before we get into solutions, here’s the headline you need to absorb: this is almost certainly happening in your business right now, even if you’ve not formally rolled out AI. This is called Shadow AI.

Let's explore what you need to do to get this under your control.


 

What's the extent of the shadow AI problem?

We've already exposed the uncomfortable truth that employees love AI and are using it. But to what extent are they using it without approval? Various studies have looked into this, and the results are worrying to say the least.

The takeaway from these studies is that if you don’t have a clear, communicated policy and an approved tool in place, shadow AI will fill the vacuum.

Why are staff using shadow AI?

Employees are increasingly turning to shadow AI due to the pressure to work faster, manage multiple responsibilities, and adapt to rapidly changing workflows. Many have become comfortable using AI in their personal lives, seeing it as a natural way to achieve more with fewer resources at work.

A significant number of employees believe that AI is essential for enhancing productivity and advancing their careers. As a result, they may circumvent unclear or slow approval processes to utilise these tools. Research from Salesforce indicates that slow policy rollouts and ambiguous guidelines are major reasons why people opt for shortcuts with consumer tools.

In the absence of clear guidance and easy-to-use, authorised solutions, public AI tools become attractive alternatives. They are often free, easily accessible, and promise to simplify daily tasks. Research consistently shows that workers feel AI enhances their productivity and effectiveness, prompting many to disregard official policies to get their work done if an appropriate tool isn't supplied to them.

Does banning AI at work stop people from using it?

If bans worked, we wouldn’t be talking about Shadow AI. And the studies cited above would be telling a different story.

The lesson is that just saying “no” without offering a safe, approved “yes” doesn't work. People will find workarounds. The winning strategy is to channel the behaviour with a compliant, business‑ready tool and clear, human‑friendly rules.

What’s the real risk of using shadow AI?

The biggest risk is ending up with an accidental data breach.

Every time someone pastes a contract, a customer email thread, pricing model, source code, or even meeting notes into a public AI tool, you risk losing control of that information. Similarly, many companies use AI meeting summary services. Not all of these protect your data; some use it to train their AIs, especially those aimed at consumers and not businesses.

Samsung famously learned the chatbot lesson the hard way. In 2023, employees pasted confidential source code into a public chatbot, prompting the company to restrict AI use. This is a classic example of accidental leakage through well‑meaning use.

Harvard Law School has even gone as far as writing about the legal ramifications of feeding data into AI chatbots. It cautions that feeding confidential materials into public chatbots can make that information accessible to the platform's staff and, more importantly, that if the model trains on it, the data could surface to other users later.

The follow-on risk is, of course, a data breach, with the resulting loss of reputation and revenue and the prospect of a GDPR fine from the ICO.

Public Chatbots vs. Business‑Grade AI

A big misconception is that “AI is AI.” But as we've alluded to above, how an AI tool handles your data varies widely.

  • Public AI tools often use your inputs to improve their models by default, which is why privacy guidance stresses careful use for sensitive data. OpenAI, for example, lets users opt out of model training, but unless you're on an enterprise‑grade plan you must turn that setting off yourself, and many casual users don't realise this.
  • Enterprise‑grade AI (properly configured) provides clearer controls and commitments around data residency, access, retention, and model training. This reduces the risk that your inputs become someone else’s outputs.

Consumer AIs are probably safe to use for non‑sensitive tasks (e.g., brainstorming public blog topics), but they are not the right tool for customer data, pricing models, internal emails, or source code.

Staff are unlikely to make this distinction, and in the absence of an approved AI tool or a policy banning AI use, they will likely use what they are already familiar with and what's at hand.

AI platforms hosted in high‑risk jurisdictions

There’s something else to consider when worrying about shadow AI: where the AI tool stores data and what local laws apply.

Some platforms explicitly store data on servers in countries where authorities can compel access, creating serious concerns for intellectual property protection and regulatory compliance.

The Chinese AI platform DeepSeek states in its privacy policy that it collects information and stores it on servers in the People’s Republic of China. Experts have warned that this raises national security and data privacy concerns because Chinese authorities can legally compel data access.

For UK businesses subject to UK GDPR and customer confidentiality obligations, sending data to tools that store or process it in jurisdictions with different privacy regimes is particularly risky.

Why Microsoft Copilot solves the problem of shadow AI

If your employees are going to use AI (and they are), your best protection is to offer a safe, approved alternative and make it the easiest path. For many organisations running Microsoft 365, that means Microsoft Copilot.

Here’s what sets Copilot apart for business use:

  • Your data isn’t used to train foundation models. Prompts, responses, and data accessed via Microsoft Graph are not used to train Copilot’s underlying model. This means you eliminate the major risk of losing control of your data. It's all kept siloed in your Microsoft tenancy.
  • Tenant isolation and permissions. Copilot respects your existing identity and permissions, surfacing only what a given user can already access in Microsoft 365 (e.g., SharePoint, Teams, OneDrive).
  • Compliance and security. Copilot is fully GDPR compliant (including keeping your data in the UK). Microsoft 365’s security, including encryption in transit and at rest, and policy enforcement, is applied to your interactions.
  • Governance. Sensitivity labels, retention rules, DLP policies, and audit logs can apply to Copilot activity so the controls you use today carry forward into your AI usage.

In short, Copilot is designed to work within your existing security boundary, rather than shuttling your information to a public, consumer service. That’s why many organisations standardise on Copilot as their approved AI assistant.

A practical plan to get control of shadow AI

You don’t need to be technical to fix this. Focus on governance, enablement, and simple controls—in that order.

1) Run a quick assessment

  • Survey staff anonymously: ask whether they're using AI and for what purposes
  • Scan for shadow AI usage with your IT partner (browser logs, firewall data, or lightweight discovery tools); a short example follows below
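
If you'd like to see what that scan could look like in practice, here's a minimal Python sketch that counts visits to well-known public AI tools in an exported proxy or firewall log. The CSV column names ("user" and "url"), the file name, and the domain list are assumptions; adjust them to match your own export and the tools you care about.

    # Minimal sketch: flag visits to public AI tools in a proxy/firewall log export.
    # Assumes a CSV with "user" and "url" columns -- adjust to your own export format.
    import csv
    from collections import Counter
    from urllib.parse import urlparse

    # Illustrative list only; extend it to cover the tools relevant to your business.
    PUBLIC_AI_DOMAINS = {
        "chatgpt.com", "chat.openai.com", "gemini.google.com",
        "claude.ai", "chat.deepseek.com", "perplexity.ai",
    }

    def flag_shadow_ai(log_path: str) -> Counter:
        """Count visits to public AI domains per user in a CSV log export."""
        hits = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                host = urlparse(row["url"]).hostname or ""
                if any(host == d or host.endswith("." + d) for d in PUBLIC_AI_DOMAINS):
                    hits[row["user"]] += 1
        return hits

    if __name__ == "__main__":
        for user, count in flag_shadow_ai("proxy_export.csv").most_common():
            print(f"{user}: {count} visits to public AI tools")

Even a rough count like this is usually enough to show whether shadow AI involves a handful of people or half the company.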

2) Publish a plain‑English AI usage policy

  • Draft and share an AI usage policy with your staff (IT Foundations customers have access to a free template AI policy on our IT portal!)

Keep it simple: note which systems can be used and for what (it's fine to just say "use only approved AI systems" and keep a separate list of approved AI applications).

Be sure to expressly prohibit pasting confidential information into public chatbots and using AI systems based in high-risk jurisdictions.

3) Approve a safe default: Microsoft Copilot

  • If you already use Microsoft 365, Copilot is the fastest route to a secure, integrated AI assistant.

Communicate the business reasons for the choice that's been made so staff understand why they can't just use ChatGPT. Copilot does not train foundation models on your prompts and responses, it respects your permissions, and it lives inside your tenant boundary.


4) Set baseline safeguards for AI implementation

  • Permissions hygiene: Review oversharing in SharePoint/OneDrive/Teams so Copilot only surfaces what people should see.
  • Sensitivity labels & DLP: Apply (or tighten) labels like “Confidential” and pair them with Data Loss Prevention rules so sensitive files can’t be shared externally or pasted into risky apps.
  • Audit & monitoring: Ensure audit logs are enabled for Copilot interactions and M365 activity so you can investigate concerns if needed.

5) Educate with short, practical guidance

  • 30‑minute sessions on “what’s safe to paste” and “good prompts” go a long way.
  • Reinforce the core message from respected authorities: public chatbots are not for confidential information; use the approved tool; and if in doubt, ask.

Next steps...?

You don’t have to become an AI expert to protect your business and enable your staff. IT Foundations' team of Edinburgh-based experts can help you implement AI in your business and remove the need for shadow AI.

 
