Artificial Intelligence (“AI”) has quickly worked its way into daily workplace operations, from drafting documents to scheduling meetings to summarizing discussions. While these tools can improve efficiency, they also introduce serious privacy risks when not properly managed. A recent report from the Information and Privacy Commissioner of Ontario (“IPC”) is a timely reminder that even well-intentioned AI use can create significant liability for employers if safeguards are not in place.
On October 27, 2025, the IPC released its findings on a hospital’s self-reported privacy breach under Ontario’s Personal Health Information Protection Act (“PHIPA”). The breach involved a generative AI platform designed to assist with note transcription and meeting management, which inadvertently recorded and transcribed a virtual meeting where confidential patient information was discussed. The hospital did many things right—it quickly reported, investigated, and cooperated with the IPC—but the incident still revealed preventable gaps that many employers may also be overlooking.
How the Breach Occurred
According to the IPC, the breach stemmed from two security lapses, neither involving the AI tool itself, that together allowed it to access highly sensitive information.
Use of Personal Email in Violation of Policy
A hospital physician, who had left the organization more than a year before the breach, had previously used his personal email, rather than his work email, to register for a recurring staff meeting, contrary to hospital policy.
Failure to Remove the Former Employee from Meeting Invites
When that physician departed in June 2023, the hospital did not remove him from the recurring meeting invitation.
Fast forward to September 2024, when the former physician installed a generative AI application on his personal device. The platform, designed to transcribe notes, provide insights, and manage meetings, synced with his digital calendar. Because he remained on the hospital’s recurring meeting invite, the AI tool automatically joined a virtual hospital rounds meeting involving current staff, on his behalf and without his knowledge.
Once inside, the AI tool recorded and transcribed the discussion. The breach was not discovered until the AI platform emailed a summary of the meeting, including patient names, diagnoses, and treatment information, to both current and former staff in the meeting group.
A simple oversight spiraled into a significant privacy incident.
What the IPC Recommended
To prevent similar incidents, the IPC issued several recommendations for the hospital, many of which apply broadly to employers across all sectors:
- Request deletion of the transcript from the AI platform to ensure improperly collected data is purged;
- Strengthen breach-response protocols by requiring immediate outreach to third-party service providers when AI or other tools collect personal information improperly;
- Revise policies related to the use of electronic devices to prohibit the use of personal devices for work activities involving sensitive data;
- Add virtual meeting “lobbies” for any meeting where confidential information may be discussed to verify attendee identity before the meeting begins; and
- Improve offboarding practices so access rights, including calendar invites, are fully revoked when employees leave.
For employers, the key message is clear: AI can drastically expand the reach and speed of a breach when administrative controls fall through the cracks.
The Hidden Liabilities of AI Adoption in the Workplace
The hospital’s breach is only one example of the broader risks employers face when integrating AI into their operations. Other liabilities could include:
Confidentiality Concerns
Without clear policies, training, and technical safeguards, employees may unknowingly input confidential, personal, or proprietary information into generative AI tools. Many users may be unaware of how these platforms store or share data, or that certain tools may retain information to improve their algorithms.
Inaccuracies and “Hallucinations”
AI does not always get it right. Tools may generate inaccurate information with high confidence, and employees may assume these outputs are reliable. When these inaccuracies influence decisions such as HR processes or client communications, they can create legal and operational problems that are difficult to unwind.
Bias and Discrimination Risks
AI systems reflect the data they were trained on. If that data carries bias, AI-driven hiring assessments, performance tools, or screening mechanisms can unintentionally produce discriminatory outcomes.
Over-Reliance and Operational Risk
When organizations depend heavily on AI systems without adequate oversight or backup processes, they risk operational disruption. An AI malfunction or incorrect output may go unnoticed until it has materially affected decisions.
AI can be transformative, but only when paired with thoughtful governance. Organizations that invest early in clear policies, training, and safeguards will be better positioned to harness the power of AI without facing preventable privacy or compliance failures.
Takeaways for Employers
- Develop clear AI-use policies: Employers should create detailed guidelines outlining what AI systems employees are permitted to use, what types of data can and cannot be entered, and activities that are explicitly prohibited. Policies should also address privacy expectations, data handling, and oversight responsibilities.
- Strengthen data-privacy protections: Employers should expressly prohibit employees from entering sensitive, confidential, or identifying information—whether personal, client-related, or proprietary—into generative AI tools. Employers should further implement checks to prevent unauthorized disclosure.
- Provide robust training: Employers should implement and regularly update training programs and materials to ensure employees understand how AI tools work, their limitations and risks, when human oversight is required, and how to handle sensitive information safely.
This blog is provided as an information service and summary of workplace legal issues.
This information is not intended as legal advice.