6 min read
itfoundations
Originally posted on September 29, 2025
Last updated on September 29, 2025
If you own or run a business, there’s a strong chance your team is already using AI tools without telling you, and without realising they might be putting your company’s data at risk.
Multiple large studies show this is not a fringe issue: employees are adopting unapproved tools, with little or no training, and feeding them sensitive data. That combination creates a perfect storm for accidental data leaks and compliance problems.
Before we get into solutions, here’s the headline you need to absorb: this is almost certainly happening in your business right now, even if you’ve not formally rolled out AI. This is called shadow AI.
Let's explore what you need to do to get this under control.
We've already exposed the uncomfortable truth that employees love AI and are using it. But to what extent are they using it without approval? Various studies have looked into this, and the results are worrying, to say the least.
The takeaway from these studies is that if you don’t have a clear, communicated policy and an approved tool in place, shadow AI will fill the vacuum.
Employees are increasingly turning to shadow AI due to the pressure to work faster, manage multiple responsibilities, and adapt to rapidly changing workflows. Many have become comfortable using AI in their personal lives, seeing it as a natural way to achieve more with fewer resources at work.
A significant number of employees believe that AI is essential for enhancing productivity and advancing their careers. As a result, they may circumvent unclear or slow approval processes to use these tools. Salesforce's research into this area indicates that slow policy rollouts and ambiguous guidelines are major reasons why people opt for shortcuts with consumer tools.
In the absence of clear guidance and easy-to-use, authorised solutions, public AI tools become attractive alternatives. They are often free, easily accessible, and promise to simplify daily tasks. Research consistently shows that workers feel AI enhances their productivity and effectiveness, prompting many to disregard official policies to get their work done if an appropriate tool isn't supplied to them.
If bans worked, we wouldn’t be talking about shadow AI, and the studies cited above would be telling a different story.
The lesson is that just saying “no” without offering a safe, approved “yes” doesn't work. People will find workarounds. The winning strategy is to channel the behaviour with a compliant, business‑ready tool and clear, human‑friendly rules.
The biggest risk is ending up with an accidental data breach.
Every time someone pastes a contract, a customer email thread, pricing model, source code, or even meeting notes into a public AI tool, you risk losing control of that information. Similarly, many companies use AI meeting summary services. Not all of these protect your data; some use it to train their AIs, especially those aimed at consumers and not businesses.
Samsung famously learned the chatbot lesson the hard way. In 2023, employees pasted confidential source code into a public chatbot, prompting the company to restrict AI use. This is a classic example of accidental leakage through well‑meaning use.
American educational heavyweight Harvard Law School has even gone as far as writing about the legal ramifications of feeding data into AI chatbots. It cautions that feeding confidential materials into public chatbots can make that information accessible to the platform's staff and, more importantly, that if the model trains on it, it could surface that data to other users later.
The follow-on risk is, of course, a data leak, with the resulting loss of reputation and revenue, and potentially a GDPR fine from the ICO.
A big misconception is that “AI is AI.” But as we've alluded to above, how an AI tool handles your data varies widely.
Consumer AIs are probably safe to use for non‑sensitive tasks (e.g., brainstorming public blog topics), but they are not the right tool for customer data, pricing models, internal emails, or source code.
Staff are unlikely to make this distinction, and in the absence of an approved AI tool or a policy banning AI use, they will likely use what they are already familiar with and what's at hand.
There’s something else to consider when worrying about shadow AI: where the AI tool stores data and what local laws apply.
Some platforms explicitly store data on servers in countries where authorities can compel access, creating serious concerns for intellectual property protection and regulatory compliance.
The Chinese AI platform DeepSeek states in its privacy policy that it collects information and stores it on servers in the People’s Republic of China. Experts have warned that this raises national security and data privacy concerns because Chinese authorities can legally compel data access.
For UK businesses subject to UK GDPR and customer confidentiality obligations, sending data to tools that store or process it in jurisdictions with different privacy regimes is particularly risky.
If your employees are going to use AI (and they are), your best protection is to offer a safe, approved alternative and make it the easiest path. For many organisations running Microsoft 365, that means Microsoft Copilot.
Here’s what sets Copilot apart for business use: it does not train foundation models on your prompts and responses, it respects the file and site permissions you already have in place, and your data stays inside your Microsoft 365 tenant boundary.
In short, Copilot is designed to work within your existing security boundary, rather than shuttling your information to a public, consumer service. That’s why many organisations standardise on Copilot as their approved AI assistant.
You don’t need to be technical to fix this. Focus on governance, enablement, and simple controls, in that order.
Keep it simple: note which systems can be used and for what (it's fine to just say "use only approved AI systems" and then keep a separate list of approved AI applications).
Be sure to expressly prohibit pasting confidential information into public chatbots and using AI systems based in high-risk jurisdictions.
Communicate the business reasons for the choice that's been made so staff understand why they can't just use ChatGPT: Copilot does not train foundation models on your prompts or responses, it respects your permissions, and it lives inside your tenant boundary.
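On the "simple controls" point above, these don't have to mean heavyweight tooling. As a purely illustrative sketch (not a tool from any of the studies or vendors mentioned here), the short Python script below shows the kind of quick check an IT team could run against a web-proxy or firewall log export to see whether consumer AI services are being visited. The CSV column names ("user", "destination_host"), the file name "proxy_export.csv", and the domain list are all assumptions you would adapt to your own environment.

```python
# Hypothetical sketch: flag visits to consumer AI services in a web-proxy log export.
# Assumes a CSV export with "user" and "destination_host" columns; adjust the column
# names, file name, and domain list to match your own proxy or firewall.
import csv
from collections import Counter

# Example list of consumer AI domains to watch for (extend as needed).
CONSUMER_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "chat.deepseek.com",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count visits to watched AI domains, per user, in a proxy log CSV."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in CONSUMER_AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in flag_shadow_ai("proxy_export.csv").most_common():
        print(f"{user}\t{host}\t{count} requests")
```

A report like this isn't about catching people out; it tells you which teams need the approved tool and the training first.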
You don’t have to become an AI expert to protect your business and enable your staff. IT Foundations' team of Edinburgh-based experts can help you implement AI in your business and remove the need for shadow AI.