Volume 11 | No. 3 | May / Jun 2023

Appropriate use of large language models (such as ChatGPT) in medical practice

The World Health Organization (WHO) recently released a statement calling for the safe and appropriate use of AI-generated large language model (LLM) tools in health care. While there are many potential benefits of LLMs in enhancing health-care practices, the College reminds registrants that LLMs are designed to complement medical care and cannot replace sound clinical judgement.

AI-generated large language models (LLMs), such as OpenAI's ChatGPT and Google’s Bard, have demonstrated the capability to assist providers with elements of care such as diagnosis, treatment planning, and patient communications. However, the WHO's concerns emphasize the need for caution. One of the foremost concerns is the potential for LLMs to provide inaccurate or misleading information that could inadvertently harm patients.

Although LLMs are proficient at generating responses that appear accurate and relevant, they can be partially or completely wrong, leading to erroneous decision-making if relied upon. Even when wrong, LLMs can produce responses that sound confident and authoritative, making it essential for registrants to verify the accuracy and reliability of any information these tools provide before acting on it.

Furthermore, the use of data without appropriate consent raises concerns regarding the protection of sensitive patient information. As stewards of patient privacy, registrants must exercise caution when engaging with LLMs and prioritize the ethical use of patient data. Identifiable patient information must never be entered into an LLM prompt.

The primary purpose of most AI applications is to complement clinical care and assist health-care providers, not to supplant the perspective of trained medical professionals. By exercising professional judgement and adhering to the expected standard of care, registrants can effectively integrate AI technologies into their practice to achieve optimal patient outcomes.

Registrants using AI tools such as LLMs (ChatGPT, Bard, etc.) should review the WHO’s fundamental principles for AI regulation and governance.