Background
Artificial intelligence (AI) has demonstrated the capability to support health-care providers with multiple elements of patient care, such as diagnosis, treatment planning, and patient communication. However, its use requires careful consideration to maintain patient safety and well-being.
The College has been receiving inquiries from both registrants and key health partners about the appropriate use of AI in medicine. As AI is new and evolving, and has the potential to impact patient safety, there was a clear need for the College to set out expectations for registrants. As a result, the Ethical Principles for Artificial Intelligence in Medicine interim guidance was developed.
What is interim guidance?
Interim guidance sets out or clarifies the College’s position on an emerging issue or topic. It is intended as guidance for registrants in areas where research and current practice are evolving or changing rapidly, the implementation of processes and procedures may be premature, or it is timely to communicate the College’s stance on an issue.
The core principles
The Ethical Principles for Artificial Intelligence in Medicine interim guidance was developed following a review of the literature and a jurisdictional scan. Based on the research, the interim guidance encompasses the following key components:
- Privacy, confidentiality, and consent: Registrants are expected to ensure that patient privacy and confidentiality are maintained when using AI.
- Accuracy and reliability: Responsibility for decisions made about patient care rests principally with the registrant.
- Transparency: Registrants using AI must be transparent about the extent to which they are relying on such tools to make clinical decisions and must be able to explain to patients how these tools work and what their limitations are.
- Interpretability: When AI is used in medicine, registrants must be capable of interpreting the clinical appropriateness of a result reached and exercising clinical judgment regarding findings.
- Bias: Registrants must be mindful of the potential for bias inherent in AI tools and critically analyze all AI-driven results or recommendations through an equity, diversity, and inclusion (EDI) lens.
- Monitoring and oversight: Registrants must monitor the use of AI in patient care to ensure that it is used appropriately and effectively.
Registrants are reminded to use their professional judgment when determining how AI can safely fit into their practice. As with any new technology, it is important to weigh both the pros and cons, and to carefully evaluate the impacts it may have.
The College will continue to monitor developments in this field and update its guidance as more information becomes available.
Questions about this document can be directed to communications@cpsbc.ca.