A new artificial intelligence method can create detailed 3D medical images from simple 2D scans, potentially transforming how doctors monitor cancer treatments while reducing patient scan times and hospital costs. This breakthrough addresses a critical limitation in nuclear medicine where conventional approaches require lengthy, expensive procedures to obtain three-dimensional data that's essential for accurate treatment planning.
Researchers discovered that AI models can reconstruct complete 3D activity distribution maps using only two standard 2D planar images - the anterior and posterior views typically collected during scintigraphy procedures. This achievement was previously considered unattainable with conventional reconstruction methods, which cannot generate three-dimensional data from just two projection angles. The team demonstrated that their AI approach produces activity maps with approximately 20% reduction in mean absolute error and 5% improvement in structural similarity compared to traditional methods.
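To see why two projection angles underdetermine a 3D volume, consider a toy illustration (not from the paper, just a minimal numpy sketch): two different activity distributions can produce identical anterior and posterior views, which is why conventional reconstruction needs many angles while a learned model can lean on anatomical priors instead.

```python
import numpy as np

# Two *different* 3D activity distributions on a tiny 2x2x2 grid.
a = np.zeros((2, 2, 2))
b = np.zeros((2, 2, 2))
a[0, 0, 0] = a[1, 1, 1] = 1.0   # activity on one diagonal
b[1, 0, 0] = b[0, 1, 1] = 1.0   # activity on the opposite diagonal

# Anterior view: line integrals along the depth axis (axis 0).
# Both volumes yield the same 2D image.
assert np.array_equal(a.sum(axis=0), b.sum(axis=0))

# Posterior view: the same line integrals traversed from behind
# (identical here because attenuation is ignored) - also equal.
assert np.array_equal(a[::-1].sum(axis=0), b[::-1].sum(axis=0))
```

With only these two views, no classical algorithm can tell `a` from `b`; additional information (more angles, or a learned prior) is required.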
The methodology leverages patient-specific anatomical information from pre-existing PET/CT scans to train AI models. The researchers created simulated datasets by generating variations of possible radiotracer uptake patterns within individual patient anatomies, including activity level variations and small rotations and translations. They explored two distinct AI approaches: a supervised 3DResUnet model that directly learns the mapping from 2D inputs to 3D outputs, and an unsupervised diffusion model that generates solutions consistent with the measured 2D projections through an iterative refinement process. Both methods were trained on patient-specific datasets rather than generic population data.
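The simulation step described above can be sketched in a few lines. This is an illustrative toy version, not the authors' code: function names and parameter ranges are assumptions, attenuation and scatter are ignored, and the planar views are modeled as simple line integrals along the depth axis.

```python
import numpy as np
from scipy.ndimage import rotate, shift

rng = np.random.default_rng(0)

def augment_volume(vol, max_angle=5.0, max_shift=2.0, scale_range=(0.8, 1.2)):
    """Generate one simulated uptake variant of a patient volume:
    random global activity scaling, a small rotation, and a small translation.
    (Ranges are illustrative, not taken from the paper.)"""
    scaled = vol * rng.uniform(*scale_range)
    angle = rng.uniform(-max_angle, max_angle)
    rotated = rotate(scaled, angle, axes=(1, 2), reshape=False, order=1)
    offsets = rng.uniform(-max_shift, max_shift, size=3)
    return shift(rotated, offsets, order=1)

def planar_views(vol):
    """Idealized anterior/posterior projections: sums along the depth
    axis, with scatter and attenuation ignored in this sketch."""
    anterior = vol.sum(axis=0)
    posterior = vol.sum(axis=0)[:, ::-1]  # mirrored, as seen from behind
    return anterior, posterior

# Build a toy "patient" volume and one augmented training pair.
vol = np.zeros((32, 32, 32))
vol[12:20, 12:20, 12:20] = 1.0            # a cubic uptake region
variant = augment_volume(vol)             # 3D training target
ant, post = planar_views(variant)         # paired 2D network inputs
```

Repeating `augment_volume` many times over one patient's anatomy would yield the kind of patient-specific training set the paper describes, where each 3D variant is paired with its two simulated planar views.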
Results from validation on four patient cases showed the AI models successfully reconstructed 3D activity distributions from real planar scintigraphy data. The diffusion model approach achieved a structural similarity index of 0.89 during training and 0.73 when tested on actual patient scans, compared to 0.65 for conventional model-based iterative reconstruction methods. The research demonstrated better lesion delineation, particularly for prostate cancer cases, with the AI-generated 3D maps showing improved organ boundary definition and anatomical accuracy compared to traditional approaches. Figure 3 in the paper uses arrows to mark prostate lesions, which appear more sharply defined in the AI reconstructions than in those produced by conventional methods.
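The two figures of merit quoted here, mean absolute error and the structural similarity index, are standard image-comparison metrics. As a minimal sketch (using a single-window, global SSIM rather than the usual locally windowed variant, so the numbers will differ slightly from library implementations):

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two activity maps."""
    return np.mean(np.abs(x - y))

def global_ssim(x, y, data_range=None, k1=0.01, k2=0.03):
    """Single-window structural similarity index, following the
    standard SSIM formula with default stabilizing constants."""
    if data_range is None:
        data_range = max(x.max(), y.max()) - min(x.min(), y.min())
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    cov = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (x.var() + y.var() + c2))

# Toy comparison: a ground-truth volume vs. a noisy "reconstruction".
rng = np.random.default_rng(1)
truth = rng.random((16, 16, 16))
recon = truth + rng.normal(0.0, 0.05, truth.shape)
```

An identical pair scores SSIM = 1 and MAE = 0; noisier reconstructions drift toward lower SSIM and higher MAE, which is the direction of improvement the paper reports for its AI maps.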
This advancement matters because it could make quantitative dosimetry - the precise measurement of radiation dose distribution in tissues - more accessible for cancer treatments using radiopharmaceuticals like 177Lu-PSMA for prostate cancer. By eliminating the need for time-consuming SPECT scans, which can take over an hour and require expensive equipment, hospitals could cut costs and free up scanner time and staff while maintaining treatment monitoring accuracy. Patients would benefit from shorter scan times and potentially more frequent monitoring during therapy courses.
The research acknowledges several limitations, including the small validation set of only four cases and the need for further testing on lesion identification accuracy. The models currently don't explicitly account for scattered radiation effects, and the team notes that generic models trained on population data rather than patient-specific data can produce unrealistic anatomical features like enlarged or reduced organs. Future work will focus on expanding validation, improving model architectures, and evaluating performance under different noise conditions and bias scenarios.
About the Author
Guilherme A.
Former dentist from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.