This is the fourth article in our series on practical and evidentiary issues in medical malpractice. Each article examines recent medical malpractice case law with a focus on the practical and evidentiary issues it raises. The goal is to provide useful insight into the obstacles that arose, in the hope that counsel in future cases can adapt and develop new ways to overcome these challenges.
Artificial intelligence (“AI”) has the potential to radically transform the way healthcare is delivered in Canada. From interpreting imaging to streamlining workflows, AI tools are quickly becoming a routine part of clinical practice. Many see this as a positive shift: by reducing human error and alleviating administrative burdens, AI has the potential to address many of the underlying causes of adverse patient outcomes, including poor documentation and communication breakdowns.
This transformative potential, however, carries a host of legal uncertainties. While much has been written about how courts might address novel uses of AI in diagnostics and clinical decision-making1, far less attention has been given to the more routine, administrative uses of AI. Unlike diagnostic tools, administrative applications of AI remain unregulated in Canada. According to the Royal College of Physicians and Surgeons of Canada’s Task Force Report on Artificial Intelligence and Emerging Digital Technologies, such tools have the potential to “liberate physicians from repetitive tasks, allowing time for more patient care, including compassionate care, and improving the safety and quality of patient care.”2 Yet, from a medical malpractice perspective, these seemingly low-risk technologies may pose significant and immediate risks to both patient safety and the integrity of the litigation process.
This article explores the complex interplay between AI and medical malpractice, with a specific focus on digital scribes: AI-powered systems that transcribe and summarize clinical encounters. While digital scribes offer the promise of reduced administrative burden, they also raise significant legal risks that medical malpractice lawyers must be alert to in reviewing documents and litigating lawsuits.
AI in the Canadian Healthcare Landscape
Artificial intelligence is notoriously difficult to define. For this reason, the Canadian Medical Protective Association (“CMPA”) does not offer a single definition of AI, but instead enumerates its fundamental features: the technology is designed to achieve specific objectives; it operates with a degree of autonomy in pursuing those objectives; it takes data as input; and it produces output in the form of information, ranging from a recommended solution to a problem to a command directing a robotic device.3
In both legal and medical discourse, a distinction has emerged between clinical and administrative uses of AI.4 Health Canada, along with the CMPA, draws this line explicitly. AI tools that serve a clinical or medical purpose — such as assisting in diagnosis, monitoring vital signs, or interpreting medical imaging — are regulated as “software as a medical device” (SaMD). These tools are subject to oversight under the Food and Drugs Act and related Health Canada guidance.5