Hallucinated Citations and Clinical Risks

AI Chatbots Fail 50% of Medical Queries in New Study

A BMJ Open audit reveals that nearly half of AI chatbot responses to medical queries are inaccurate or incomplete.

By Avantgarde News Desk · 1 min read
A close-up of a person's hand holding a smartphone displaying a chat interface with a red warning icon, set against a blurred medical office background.

Photo: Avantgarde News

A study in BMJ Open found that five AI chatbots provided poor medical advice half the time [1][2]. Researchers evaluated the chatbots' responses to health queries and found that 50% were inaccurate or incomplete [1]. The audit also highlighted frequent hallucinations, in which the chatbots invented scientific citations [1], often providing links or references to studies that do not exist [2]. These findings raise concerns about the reliability of artificial intelligence as a source of health information, and experts warn that these systems are not suitable for clinical decision-making or self-diagnosis [3].

Editorial notes

Transparency note: Drafted with LLM; human-edited
AI assisted: Yes
Human review: Yes
Risk assessment: Elevated. The topic involves medical misinformation risks and public health safety.



About the author

Avantgarde News Desk covers hallucinated citations, clinical risks, and editorial analysis for Avantgarde News.
