
AI Improves Medical Image Analysis with Ordered Labels

New loss functions help AI understand the natural order in medical images, leading to more consistent and accurate segmentations for tasks like tumor grading and tissue layering.

AI Research
April 01, 2026
4 min read

A new approach to training artificial intelligence for medical image segmentation addresses a critical limitation in how these systems interpret ordered classes, such as disease severity or anatomical layers. Traditional AI models often treat all classification errors equally, ignoring that in many medical contexts, mislabeling a pixel as an adjacent class is less severe than assigning it to a distant one. This oversight can lead to less reliable results in applications like grading skin conditions or mapping tissue depth. By incorporating ordinal relationships into the training process, researchers have developed loss functions that enhance both the accuracy and structural coherence of segmentations, offering potential improvements in clinical diagnostics and treatment planning.

The key finding from this research is that using specialized loss functions that respect the natural order of labels significantly improves the consistency and quality of medical image segmentations. The study tested three main loss functions: Quasi-Unimodal Loss (QUL), Expectation Mean Squared Error (EXP MSE), and Contact Surface Loss using Signed Distance Function (CSSDF). These functions were applied to five medical datasets, including Breast Aesthetics, Cervix-MobileODT, Mobbio, Teeth-ISBI, and Teeth-UCV, which involve tasks like evaluating breast cancer treatment aesthetics and segmenting dental structures. Results showed that these ordinal losses reduced structural inconsistencies between adjacent pixels and maintained or improved segmentation accuracy compared to standard cross-entropy loss.

The methodology involved adapting loss functions originally designed for ordinal classification to the semantic segmentation setting, using a U-Net architecture trained on medical images. The researchers employed a hybrid approach, combining standard cross-entropy loss with ordinal loss functions through a parameter λ to balance categorical and ordinal penalties. For example, QUL encourages predicted probability distributions to be quasi-unimodal, meaning probabilities peak around the true class, while EXP MSE models ordinal distance and penalizes discrepancies in expected values and variance. CSSDF enforces spatial consistency by penalizing abrupt transitions between ordinally distant classes in neighboring pixels. The models were trained with data augmentation, the Adam optimizer, and 5-fold cross-validation, following strict experimental protocols to ensure fair comparisons.
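To make the hybrid idea concrete, here is a minimal pure-Python sketch of a per-pixel objective that adds a λ-weighted ordinal term (in the spirit of EXP MSE) to standard cross-entropy. The function names and the simplified one-pixel form are illustrative assumptions, not the paper's exact implementation:

```python
import math

def cross_entropy(probs, true_class):
    """Standard categorical cross-entropy for one pixel's predicted distribution."""
    return -math.log(probs[true_class])

def expectation_mse(probs, true_class):
    """EXP MSE-style ordinal penalty: squared distance between the expected
    class index under the predicted distribution and the true class index.
    A near-miss (adjacent class) costs far less than a distant miss."""
    expected = sum(k * p for k, p in enumerate(probs))
    return (expected - true_class) ** 2

def hybrid_loss(probs, true_class, lam=0.5):
    """Hybrid objective CE + lam * ordinal term, mirroring the paper's use of
    a parameter lambda to balance categorical and ordinal penalties.
    lam=0.5 is an arbitrary illustrative value."""
    return cross_entropy(probs, true_class) + lam * expectation_mse(probs, true_class)

# With true class 2, a distribution peaked one step away is penalized
# less overall than one peaked three steps away, even though both assign
# the same (low) probability to the correct class.
near_miss = hybrid_loss([0.1, 0.7, 0.1, 0.1], true_class=2)
far_miss = hybrid_loss([0.7, 0.1, 0.1, 0.1], true_class=2)
```

In a real segmentation model this term would be averaged over all pixels of the softmax output; the one-pixel version simply isolates how the ordinal term grades errors by distance rather than treating all mistakes equally.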

Analysis of the results, detailed in tables within the paper, reveals that ordinal loss functions consistently outperformed the baseline. For instance, on the contact surface metric, which measures structural consistency between adjacent pixels, CE+QUL reduced errors from 14.5% to 1.2% on the Cervix-MobileODT dataset, and CE+EXP MSE achieved similar improvements. The Dice coefficient, a measure of segmentation accuracy, showed modest gains, with CE+QUL increasing scores from 93.8% to 94.4% on Breast Aesthetics. The unimodal pixels metric, assessing pixel-level ordinal coherence, indicated that structured losses like CSSDF enhanced consistency, though unimodal losses like QUL and EXP MSE provided a better overall balance between structural and accuracy metrics. These results suggest that enforcing ordinal relationships does not compromise performance and can lead to more robust segmentations.
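A plausible reading of the contact surface idea can be sketched as the fraction of adjacent pixel pairs whose labels are more than one ordinal step apart; the exact definition in the paper may differ, so treat this as an illustrative assumption:

```python
def contact_surface_violations(seg):
    """Fraction of horizontally/vertically adjacent pixel pairs in a 2-D
    label map whose labels differ by more than one ordinal step.
    Lower is better: adjacent regions then respect the label ordering
    (e.g., no tissue layer is skipped). Illustrative metric, not the
    paper's exact formulation."""
    rows, cols = len(seg), len(seg[0])
    pairs = violations = 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    pairs += 1
                    if abs(seg[r][c] - seg[rr][cc]) > 1:
                        violations += 1
    return violations / pairs

# Ordered layers 0 -> 1 -> 2 produce no violations; a map that jumps
# straight from 0 to 2 does.
smooth = [[0, 0], [1, 1], [2, 2]]
jumpy = [[0, 0], [2, 2], [2, 2]]
```

Losses like CSSDF push the network toward maps that score well on this kind of measure by penalizing abrupt ordinal jumps between neighbouring pixels during training, rather than only checking them after the fact.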

The implications of this work are significant for medical imaging, where ordinal relationships are common but often overlooked. By improving the AI's ability to handle ordered classes, these loss functions could lead to more reliable tools for tasks such as tumor grading, where severity levels must be accurately distinguished, or anatomical segmentation, where layers need to be delineated precisely. This could enhance diagnostic accuracy, reduce manual annotation burdens, and support personalized treatment plans. The study highlights the importance of incorporating domain knowledge into AI training, moving beyond purely data-driven approaches to create models that better align with clinical realities.

However, the research acknowledges limitations, including the need for further validation across more diverse medical datasets and imaging modalities. The study focused on specific loss functions and architectures, and their performance may vary in other contexts. Additionally, while ordinal losses improved metrics like contact surface and unimodal pixels, gains in the Dice coefficient were sometimes modest, indicating that further optimization may be needed. The paper also notes that overly strong enforcement of pixel-level unimodality, as in some loss functions, may not always translate to better global segmentation, suggesting a trade-off between local and global consistency. Future work could explore combining different ordinal losses or adapting them to real-time applications.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.