
AI That Explains Its Medical Decisions Gains Doctor Trust

New method helps AI pinpoint exactly where abnormalities appear in medical scans and explain its findings using location-specific text, improving both accuracy and clinician confidence.

AI Research
November 14, 2025
3 min read

Artificial intelligence systems that analyze medical images face a critical hurdle: doctors don't trust black-box decisions. When an AI identifies pneumonia in a chest X-ray but can't show exactly where it is looking, clinicians remain skeptical. A new approach addresses this problem by forcing the AI to identify and explain the specific locations of abnormalities, making its reasoning transparent and verifiable.

Researchers developed an explainable AI system that is constrained to focus on regions containing actual medical anomalies. Unlike current methods that generate heat maps or other visual explanations after the fact, this approach ensures the AI's attention overlaps with the reported locations of abnormalities. The system combines text analysis of radiology reports with image processing to create a self-justifying diagnostic tool.

The method works in two stages. First, it analyzes the text of radiology reports using a bidirectional long short-term memory (BiLSTM) network to extract location-specific information. This identifies where radiologists describe abnormalities appearing in chest X-rays, such as the upper zone, lower hemidiaphragm, or cardiophrenic angle. The system then uses this textual information to create approximate bounding boxes around potential anomaly locations.
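To make the first stage concrete, here is a minimal sketch, assuming a PyTorch BiLSTM tagger that labels report tokens as location mentions plus a hand-made phrase-to-box lookup; the vocabulary, tag set, and box coordinates are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch (not the authors' code) of stage one: a BiLSTM tagger that
# labels report tokens as location mentions, then maps each recognized
# location phrase to a rough bounding box via a lookup table.
# Vocabulary, tag set, and box coordinates are illustrative assumptions.
import torch
import torch.nn as nn

class LocationTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)  # O vs LOCATION

    def forward(self, token_ids):
        x = self.embed(token_ids)            # (batch, seq, embed_dim)
        h, _ = self.bilstm(x)                # (batch, seq, 2*hidden_dim)
        return self.classifier(h)            # per-token tag logits

# Illustrative mapping from location phrases to coarse boxes in normalized
# image coordinates (x1, y1, x2, y2); the real system derives these regions
# from the report text rather than a fixed table.
LOCATION_TO_BOX = {
    "upper zone": (0.05, 0.05, 0.95, 0.40),
    "cardiophrenic angle": (0.35, 0.70, 0.65, 0.95),
}

if __name__ == "__main__":
    vocab = {"<pad>": 0, "opacity": 1, "in": 2, "the": 3, "upper": 4, "zone": 5}
    tagger = LocationTagger(vocab_size=len(vocab))
    tokens = torch.tensor([[1, 2, 3, 4, 5]])        # "opacity in the upper zone"
    tag_logits = tagger(tokens)
    print(tag_logits.shape)                          # torch.Size([1, 5, 2])
    print(LOCATION_TO_BOX["upper zone"])             # coarse prior region
```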

In the second stage, an attention-guided inference network processes the medical images while being forced to focus on these identified regions. The network is built on a ResNet-101 architecture and trained with additional constraints that penalize it when it highlights irrelevant areas. This steers the AI toward medically significant regions rather than arbitrary patterns in the image.
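The attention constraint can be pictured with a short sketch like the one below, assuming a torchvision ResNet-101 backbone, a simple one-channel attention head, and a penalty on attention mass that falls outside the text-derived region mask; the attention design and loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (not the authors' implementation) of the attention
# constraint in stage two: a ResNet-101 backbone produces a spatial attention
# map, and an extra loss term penalizes attention that falls outside the
# text-derived bounding-box mask. The attention head and the 0.1 weighting
# are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision

class AttentionGuidedClassifier(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        backbone = torchvision.models.resnet101(weights=None)
        # keep everything up to the final conv feature map (2048 x H/32 x W/32)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attn = nn.Conv2d(2048, 1, kernel_size=1)   # 1-channel attention map
        self.classifier = nn.Linear(2048, num_classes)

    def forward(self, images):
        fmap = self.features(images)                    # (B, 2048, h, w)
        attn = torch.sigmoid(self.attn(fmap))           # (B, 1, h, w) in [0, 1]
        pooled = (fmap * attn).mean(dim=(2, 3))         # attention-weighted pooling
        logits = self.classifier(pooled)
        return logits, attn

def outside_region_penalty(attn, region_mask):
    """Penalize attention mass outside the text-derived region.

    attn:        (B, 1, h, w) attention map
    region_mask: (B, 1, h, w) binary mask, 1 inside the approximate box
    """
    outside = attn * (1.0 - region_mask)
    return outside.sum(dim=(1, 2, 3)).mean()

if __name__ == "__main__":
    model = AttentionGuidedClassifier()
    images = torch.randn(2, 3, 224, 224)
    logits, attn = model(images)
    region_mask = torch.zeros_like(attn)
    region_mask[:, :, :3, :] = 1.0                      # toy "upper zone" mask
    bce = nn.BCEWithLogitsLoss()(logits, torch.ones(2, 1))
    loss = bce + 0.1 * outside_region_penalty(attn, region_mask)
    print(loss.item())
```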

The results show clear improvements in both accuracy and interpretability. For detecting left lung opacity, the system achieved an area under the precision-recall curve (AUPRC) of 0.67 compared to 0.63 for standard methods, while the area under the receiver operating characteristic curve (AUROC) improved from 0.74 to 0.77. More importantly, the generated attention maps clearly show the system focusing on the actual anomaly locations rather than random areas of the image.
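For readers less familiar with these metrics, here is how AUPRC and AUROC are typically computed with scikit-learn; the numbers it prints come from random toy data, not from the study.

```python
# A small sketch of how the two reported metrics are commonly computed with
# scikit-learn; the labels and scores below are random toy values.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                            # 1 = opacity present
y_score = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)    # toy model scores

auprc = average_precision_score(y_true, y_score)   # area under precision-recall curve
auroc = roc_auc_score(y_true, y_score)             # area under ROC curve
print(f"AUPRC={auprc:.2f}, AUROC={auroc:.2f}")
```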

This breakthrough matters because it addresses the fundamental trust problem preventing widespread AI adoption in medicine. Current AI systems can achieve high accuracy but fail to convince doctors because they can't explain their reasoning. This method provides concrete evidence that the AI is looking in the right places, making its decisions verifiable rather than mysterious. In practical terms, it could speed up radiology workflows by automatically highlighting and explaining suspicious areas, allowing doctors to focus their attention more efficiently.

The approach does have limitations. While it improves localization accuracy, the correlation between better localization and overall diagnostic performance remains unclear. The system has been tested primarily on chest X-rays with opacity detection, and its effectiveness with other types of medical images and abnormalities requires further validation. Additionally, the method relies on having radiology reports available for training, which may limit its application in settings where such detailed textual data isn't available.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn