Medical imaging just became safer and more precise. Researchers have developed an artificial intelligence system that can generate the crucial correction data needed for accurate PET scans without requiring additional CT scans, potentially reducing patient radiation exposure while maintaining diagnostic quality.
The key finding demonstrates that conditional diffusion models can create high-quality pseudo-CT images directly from non-attenuation-corrected PET scans. Evaluated on brain scans from 159 patients, the system achieved a mean absolute error of 32 ± 10.4 Hounsfield Units (HU) and an average error of (1.48 ± 0.68)% in regions of interest when comparing reconstructions that used AI-generated corrections against those using traditional CT-based corrections.
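To make the idea concrete, here is a minimal sketch of DDPM-style reverse sampling in which the non-attenuation-corrected PET slice conditions a noise predictor. The paper's actual model is the UNet described below; the stand-in `eps_model`, the schedule values, and conditioning by passing the PET slice alongside the noisy image are illustrative assumptions, not details confirmed by the source.

```python
import numpy as np

def ddpm_sample(eps_model, pet_slice, steps=50, rng=None):
    """Minimal conditional DDPM reverse sampler (sketch).

    eps_model(x_t, pet, t) predicts the noise present at step t; in the
    paper this role is played by a UNet, and how the PET slice enters the
    network is an assumption here. Starts from pure noise and iteratively
    denoises toward a pseudo-CT slice.
    """
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, steps)       # toy variance schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(pet_slice.shape)     # start from Gaussian noise
    for t in range(steps - 1, -1, -1):
        eps = eps_model(x, pet_slice, t)
        # DDPM posterior mean: remove the predicted noise contribution
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                                # re-inject noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

In the real pipeline this would be run slice by slice, with the output mapped to Hounsfield Units for use as the attenuation map.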
The methodology employs a multiview ensemble approach using three separate 2D diffusion models that analyze different slice orientations: transverse, coronal, and sagittal. Each model uses a UNet backbone with ResNet blocks and attention gates to focus on relevant anatomical features. The system combines predictions from all three orientations through a majority voting process that averages consistent predictions while excluding outliers, creating a final 3D correction map.
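The voting step above can be sketched voxelwise: keep the orientation predictions that agree with each other and average them, dropping any outlier. The exact consistency rule and the 50 HU tolerance below are illustrative assumptions; the paper describes the scheme only at a high level.

```python
import numpy as np

def majority_vote(vol_t, vol_c, vol_s, tol=50.0):
    """Fuse pseudo-CT volumes (in HU) from the transverse, coronal and
    sagittal diffusion models.

    Per voxel: predictions within `tol` HU of the three-way median are
    treated as consistent and averaged; predictions outside that band are
    excluded as outliers. `tol` is an assumed value, not from the paper.
    """
    stack = np.stack([vol_t, vol_c, vol_s])          # shape (3, ...)
    median = np.median(stack, axis=0)                # robust per-voxel centre
    consistent = np.abs(stack - median) <= tol       # mask of agreeing views
    weights = consistent.astype(float)               # at least one view always agrees
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```

For example, if the three models predict 100, 105, and 400 HU at a voxel, the 400 HU outlier is dropped and the fused value is 102.5 HU.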
Results show clear gains in image-quality metrics. With majority voting, the system achieved peak signal-to-noise ratios of 29.9 ± 2.3 dB (transverse), 29.4 ± 2.1 dB (coronal), and 29.8 ± 2.0 dB (sagittal), with structural similarity indices of 0.924 ± 0.030, 0.916 ± 0.032, and 0.919 ± 0.030, respectively. Relative to the best single-view model (transverse), the ensemble reduced root mean square error from 131.8 ± 33.9 HU to 128.5 ± 31.0 HU.
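The RMSE and PSNR figures above compare a pseudo-CT against the ground-truth CT; a minimal sketch of both metrics is below. The `data_range` of 2000 HU (roughly -1000 HU air to +1000 HU bone) is an assumed dynamic range, since the paper does not state the value it used.

```python
import numpy as np

def rmse_hu(pred_ct, true_ct):
    """Root mean square error between two CT volumes, in Hounsfield Units."""
    return float(np.sqrt(np.mean((pred_ct - true_ct) ** 2)))

def psnr_db(pred_ct, true_ct, data_range=2000.0):
    """Peak signal-to-noise ratio in dB: 20*log10(data_range / RMSE).
    `data_range` is an assumed span of CT values, not taken from the paper."""
    return float(20.0 * np.log10(data_range / rmse_hu(pred_ct, true_ct)))
```

A constant 10 HU offset, for instance, gives an RMSE of 10 HU and a PSNR of about 46 dB at this assumed range.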
This advancement matters because traditional attenuation correction requires additional CT scans, exposing patients to extra radiation and creating potential misalignment issues between PET and CT sequences. The AI approach eliminates both concerns while using existing PET scanner infrastructure. For patients undergoing regular monitoring, this could significantly reduce cumulative radiation exposure without compromising diagnostic accuracy.
Limitations include challenges with nasal and sinus regions where air-filled cavities and thin bones provide minimal detail in PET images. The model also struggled with shoulder areas due to low tracer uptake and edge-of-field amplification effects. The training used data from a single scanner type (Siemens Biograph Vision), so performance on different scanner models remains unknown. Additionally, the system hasn't been tested on patients with large tumors, skull plates, or significant anatomical variations.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn