AI Models Beat Doctors in Complex Diagnosis Study
A Harvard-led study published in Science reports that OpenAI's o1-preview model achieved nearly 80% diagnostic accuracy on rare, complex cases.
A doctor in a white lab coat reviewing complex medical data on a glowing computer screen in a modern clinical setting.
Photo: Avantgarde News
Advanced AI reasoning models, specifically OpenAI's o1-preview, outperformed human doctors in diagnosing complex medical cases [1]. The study, led by Harvard and published in Science, found the AI achieved correct diagnoses in nearly 80% of challenging scenarios [1][2]. This success rate exceeds the performance of experienced clinicians and previous AI versions [1][3].
Researchers focused on rare diseases, where human diagnostic error rates are typically higher [2]. The findings suggest AI could serve as a powerful tool for reducing diagnostic uncertainty in specialized medicine [1]. While the results show promise, experts are still evaluating how these models can best assist doctors in clinical settings [3], and questions about how they will be integrated into clinical workflows remain open.
Editorial notes
Transparency note
AI assisted drafting. Human edited and reviewed.
- AI assisted: Yes
- Human review: Yes
- Last updated:
Risk assessment
Reviewed for sourcing quality and editorial consistency.
Sources
1. sciencenews.org — https://www.sciencenews.org/article/ai-help-doctors-help-diagnoses
2. ndtvprofit.com — https://www.ndtvprofit.com/technology/openai-model-outperforms-doctors-to-diagnose-rare-diseases-report-11435457
3. painnewsnetwork.org — https://www.painnewsnetwork.org/stories/2026/5/1/can-ai-outperform-human-doctors
About the author
Avantgarde News Desk covers AI accuracy vs. clinical expertise and editorial analysis for Avantgarde News.