AI Chatbots Give Misleading Medical Advice 50% of the Time

A BMJ Open study finds major AI tools provide inaccurate health information and fabricated citations.

By Avantgarde News Desk · 1 min read
An editorial illustration of a smartphone screen showing a complex medical chart with a stethoscope lying on top, symbolizing the intersection of AI technology and healthcare advice.

Photo: Avantgarde News

Five popular AI chatbots provide "somewhat" or "highly" problematic medical advice in half of all cases, according to a study published in BMJ Open [1][2]. Researchers tested ChatGPT, Gemini, Meta AI, Grok, and DeepSeek against common health queries [1][3]. The audit found that these tools frequently present inaccurate information, including fabricated citations, as fact [2], and that they often respond with high confidence even when the information is wrong [1]. The problem is particularly acute in specialized areas such as nutrition and stem cell therapies [2][3]. Experts warn that relying on these automated tools for high-risk medical decisions poses significant safety risks [1].

Editorial notes

Transparency note: Drafted with LLM; human-edited
AI assisted: Yes
Human review: Yes
Risk assessment: Minimal

Reviewed for sourcing quality and editorial consistency.

Sources

About the author

Avantgarde News Desk covers risks in specialized medical advice and editorial analysis for Avantgarde News.
