Across Canada, doctors and nurses are quietly using public artificial intelligence (AI) tools like ChatGPT, Claude, Copilot and Gemini to write clinical notes, translate discharge summaries or summarize patient data. But while these services are fast and convenient, they also pose unseen cyber risks once sensitive health information is no longer under the hospital's control.
New evidence suggests that this behavior is becoming more common. A recent article cites a study showing that around one in five GPs in the UK report using generative AI tools such as ChatGPT to write clinical correspondence or notes.
While Canadian-specific data remains limited, anecdotal reports suggest similar informal use is occurring in hospitals and clinics across the country.
This phenomenon, known as shadow AI, refers to the use of AI systems without formal institutional approval or oversight. In healthcare, it often means well-meaning clinicians entering patient data into public chatbots that process information on foreign servers. Once this data leaves a secure network, there is no guarantee of where it will go, how long it will be stored, or whether it may be reused to train commercial models.
A growing blind spot
Shadow AI has quickly become one of the most neglected threats in digital healthcare. A 2024 IBM Security report found that the global average cost of a data breach rose to nearly US$4.9 million, the highest on record. While most attention is focused on ransomware or phishing, experts warn that insider and accidental data leaks now account for a growing share of all security breaches.
In Canada, the Insurance Bureau of Canada and the Canadian Centre for Cyber Security have both highlighted the rise in internal data exposure, where employees unintentionally reveal proprietary information. When those employees use unauthorized AI systems, the line between human error and system vulnerability becomes blurred.
Are there documented cases of this in healthcare? While experts point to internal data exposure as a growing risk for healthcare organizations, publicly documented cases where shadow AI is the cause remain rare. The risks, however, are real.
Unlike malicious attacks, these leaks occur silently when patient data is simply copied and pasted into a generative AI tool. No alarm sounds, no firewall is triggered and no one notices that confidential data has crossed national borders. This allows shadow AI to bypass any protections built into an organization's network.
Why anonymization isn’t enough
Even when names and hospital numbers are removed, health information is rarely truly anonymous. The combination of clinical details, time stamps and geographical clues can often enable re-identification. Research has shown that even large "anonymized" data sets can be matched to individuals with surprising accuracy when combined with other public information.
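To see why, consider a toy sketch in Python. Every record below is synthetic and the matching logic is deliberately simplistic; real re-identification attacks draw on much richer auxiliary data. The point is only that a handful of quasi-identifiers (birth year, postal prefix, sex) can be enough to link an "anonymized" record back to a named individual.

```python
# Toy illustration of quasi-identifier re-identification: linking an
# "anonymized" clinical extract to a public directory on shared attributes.
# All records are synthetic and invented for illustration.

anonymized_visits = [
    {"birth_year": 1968, "postal_prefix": "M5V", "sex": "F", "diagnosis": "type 2 diabetes"},
    {"birth_year": 1991, "postal_prefix": "K1A", "sex": "M", "diagnosis": "asthma"},
]

public_directory = [
    {"name": "A. Tremblay", "birth_year": 1968, "postal_prefix": "M5V", "sex": "F"},
    {"name": "B. Singh", "birth_year": 1985, "postal_prefix": "V6B", "sex": "M"},
]

# Link any visit that agrees with exactly one directory entry on all three
# quasi-identifiers: no names were ever shared, yet the diagnosis is exposed.
for visit in anonymized_visits:
    matches = [
        person for person in public_directory
        if (person["birth_year"], person["postal_prefix"], person["sex"])
        == (visit["birth_year"], visit["postal_prefix"], visit["sex"])
    ]
    if len(matches) == 1:
        print(f"{matches[0]['name']} re-identified, diagnosis: {visit['diagnosis']}")
```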
Public AI models further complicate the issue. Tools like ChatGPT or Claude process input via cloud-based systems that may temporarily store or cache data.
While providers claim to remove sensitive content, each has its own data retention policies and few disclose where their servers are physically located. For Canadian hospitals subject to the Personal Information Protection and Electronic Documents Act (PIPEDA) and provincial privacy laws, this creates a legal gray area.

Everyday examples hidden in plain sight
Imagine a nurse using an online translator powered by generative AI to help a patient who speaks a different language. The translation appears instantly and accurately, yet the input text, which may include the patient's diagnosis or test results, is sent to servers outside of Canada.
Another example involves doctors using AI tools to write follow-up letters for patients or summarize clinical notes, unknowingly exposing sensitive information in the process.
A recent Insurance Business Canada report warned that shadow AI could become the "next big blind spot" for insurers.
Because the practice is informal and voluntary, most organizations have no metrics to measure its scope. Hospitals that do not log AI usage cannot audit what data left their systems or who sent it.
Bridging the gap between policy and practice
Canada's healthcare privacy framework was developed long before the arrival of generative AI. Laws such as PIPEDA and provincial health information statutes regulate how data is collected and stored, but rarely mention machine learning models or large-scale text generation.
As a result, hospitals are forced to interpret existing rules in a rapidly evolving technological environment. Cybersecurity specialists argue that healthcare organizations need three levels of response:
1- Disclosure of AI use in cybersecurity audits: Routine security assessments should include an inventory of all AI tools in use, sanctioned or not. Generative AI use should be treated the same way organizations treat bring-your-own-device risks.
2- Certified "secure AI for health" gateways: Hospitals can offer approved, privacy-compliant AI systems that perform all processing in Canadian data centers. Centralizing access enables oversight without stifling innovation (a minimal sketch of this idea appears below).
3- Data-handling skills for staff: Training should make clear what happens when data is entered into a public model and how even small fragments can compromise privacy. Awareness remains the strongest line of defense.
These steps won't eliminate all risk, but they begin to align frontline practice with regulatory intent and protect both patients and professionals.
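As a rough illustration of the second recommendation, the sketch below shows, in Python, what a minimal in-network "secure AI for health" gateway could do: strip obvious identifiers before any text is forwarded to an approved, in-country model endpoint. The endpoint URL and the redaction patterns are invented for illustration; a real deployment would rely on a vetted de-identification service, authentication and audit logging.

```python
import re

import requests  # third-party HTTP client, assumed to be installed

# Hypothetical approved endpoint hosted in a Canadian data center.
# Invented for illustration; not a real service.
APPROVED_ENDPOINT = "https://ai-gateway.hospital.example.ca/v1/summarize"

# Crude placeholder patterns for obvious identifiers. A production gateway
# would use a vetted de-identification library, not a few regexes.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{10}\b"), "[HEALTH-CARD]"),      # 10-digit card numbers
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "[DATE]"),  # ISO-style dates
]


def redact(text: str) -> str:
    """Replace obvious identifiers before text leaves the hospital network."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


def summarize_note(note: str) -> str:
    """Send a redacted note to the approved in-country model and return its summary."""
    response = requests.post(APPROVED_ENDPOINT, json={"text": redact(note)}, timeout=30)
    response.raise_for_status()
    return response.json()["summary"]


if __name__ == "__main__":
    sample = "Patient 4710339284, seen 2024-03-15, reports improved shortness of breath."
    print(redact(sample))  # identifiers are masked before anything is sent out
```

Routing every request through one approved gateway like this also gives a hospital a single place to log and audit exactly what data leaves its network, addressing the visibility gap described above.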
The path ahead
Canada's healthcare sector is already under pressure from staff shortages, cyberattacks and growing digital complexity. Generative AI offers welcome relief by automating documentation and translation, but its uncontrolled use could undermine public trust in medical privacy.
Policymakers now face a choice: proactively regulate the use of AI in healthcare settings, or wait for the first major privacy scandal to force reform.
The solution isn't to ban these tools, but to integrate them securely. Building national standards for "AI-safe" data processing, much like food safety or infection control protocols, would help ensure that innovation doesn't come at the expense of patient confidentiality.
Shadow AI isn't a futuristic concept; it's already embedded in everyday clinical practice. Addressing it requires a coordinated effort across technology, policy and training, before Canada's healthcare system learns the hard way that the most dangerous cyber threats can come from within.

