By Milcah Tanimu
A recent study led by Long Island University (LIU) in Brooklyn, New York, found that ChatGPT, the AI chatbot developed by OpenAI, provided inaccurate information about drug use. Pharmacists who reviewed the chatbot's answers judged nearly 75% of its drug-related responses to be incomplete or incorrect, and the American Society of Health-System Pharmacists (ASHP) noted that some responses could potentially endanger patients.
The study, presented at ASHP’s Midyear Clinical Meeting, also reported that ChatGPT generated “fake citations” when asked for references to support its responses. Lead author Sara Grossman, PharmD, said she was surprised by how quickly ChatGPT could provide background information, but highlighted its failure to generate accurate and complete responses.
One example involved a question about a potential interaction between Paxlovid, a COVID-19 antiviral, and verapamil, a blood pressure medication. ChatGPT incorrectly responded that no interactions had been reported, when in fact the combination can cause an excessive drop in blood pressure.
OpenAI, the developer of ChatGPT, reiterated that its models are not fine-tuned to provide medical information and should not be used for serious medical conditions. The company emphasized the importance of verifying information with trusted sources and noted that its policies include disclaimers about using AI in healthcare.
Healthcare professionals have a crucial role in guiding and critiquing evolving AI technologies like ChatGPT, ensuring they complement, rather than replace, professional medical judgment. The study did not evaluate responses from other generative AI platforms, leaving room for further investigation into their accuracy under similar conditions.