Alexis is director of content at AIMed, with responsibility for the research, development and delivery of products across events, digital and publishing. A highly experienced events executive with a career focus on the intersection between healthcare and technology, he is also a school governor leading on teaching, learning, and quality of education.
The emergency department was overcrowded, with patients waiting long hours to receive care. Machine learning was deployed to improve patient flow by examining the preliminary data collected from patients on arrival. Patients who required testing (an X-ray or blood test, for example) were identified, and tests were ordered automatically before the patient was seen by an emergency physician. It has been estimated that 25% of emergency patients can be expedited this way: patients who do not require any testing can be attended to first, while those who do will have their results ready when physicians see them.
The above scenario was one of two actual case studies raised last October during a virtual meeting supported by the Canadian Institute for Advanced Research (CIFAR) and led by Dr. Colleen M. Flood, the University of Ottawa Research Chair in Health Law and Policy and the Lead of AI + Health, part of the University’s AI + Society Initiative, which is designed to shape the policy framework for the future of healthcare. This was the first time the University had hosted an assembly of interdisciplinary experts in AI, law, medicine, ethics, and policy to address privacy, safety, and quality concerns surrounding the use of AI in healthcare.
At the meeting, attendees were free to join one of three “breakout sessions” — on law and privacy, law and safety, and ethics — to review each case study under a specific theme and explore how Canada can maximize the potential benefits of AI without compromising present standards. All discussions were compiled into a report entitled “AI & Health Care: A Fusion of Law & Science. An Introduction to the Issues,” released last month. The report outlined many regulatory issues, including how to ensure AI tools used in healthcare settings meet safety and quality standards; what to do when AI and care providers disagree on an option; and the challenges of obtaining patients’ informed consent for the use of AI tools.
The University of Ottawa is home to many AI research efforts. It is part of the Ottawa AI Alliance, a scientific society supported by the National Research Council Canada, which hosts an annual event where some of the top AI and machine learning scientists share their findings and breakthroughs in these domains. The institution recognizes that the rapid pace of AI development will not only change the face of modern societies but also create new ethical, legal, and policy challenges that ultimately affect the public. There is a pressing need to focus on the growing influence of AI and its implications in order to frame future policy.
As such, the AI + Society Initiative was founded last January with funding from Scotiabank to support research programs related to AI and inclusion and AI and regulation. The Initiative aims to define or redefine problems and identify solutions related to AI and technology development — specifically, promoting an agenda that will minimize global digital injustices, design an effective and ethical framework for inclusive participation, and enable a better response to the evolution of the technology.
During the summer, the AI + Society Initiative expanded with two more streams of research: AI and the Future of Healthcare (AI + Health) and AI and the Future of the Environment (AI + Environment), with support from the Alex Trebek Forum for Dialogue. As AI will not stop at assisting medical professionals but will become an integral part of the medical decision-making team, the Initiative believes future research in this field will survey the salient legal issues that affect the appropriate adoption of AI into the healthcare system. AI and the Future of Healthcare is presently led by Dr. Flood, an expert in comparative healthcare law and policy.
After the initial AI + Health meeting, Dr. Flood and her interdisciplinary team acknowledged that while there is currently no bespoke legislation regulating AI in healthcare in Canada, there are many existing regulations that address parts of the issues that were highlighted. It is worth considering whether those existing laws are suited to the complexity of AI-related regulatory issues. Dr. Flood and her team are working on the next CIFAR-sponsored session, which will target AI-empowered medical devices, their safety, and possible discriminatory practices or privacy violations.