As artificial intelligence systems become more integrated into critical decision-making processes, from healthcare to finance, their inability to clearly explain why they reach certain conclusions has become a major barrier to trust and adoption. Researchers have developed a new approach that helps AI systems provide more understandable explanations tailored to different users' needs, addressing a fundamental challenge in making AI more transparent and accountable.
The key finding from this research is that AI systems can be designed to generate multiple types of explanations, each answering a different kind of user question. The researchers created what they call an Explanation Ontology, a structured framework that defines various explanation types and how they should be generated. This allows a system to offer, for example, contrastive explanations (why one option was chosen over another), trace explanations (the reasoning steps behind a decision), and counterfactual explanations (how changing the inputs would alter the outcome).
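To make the idea concrete, here is a minimal sketch in Python of how such a taxonomy of explanation types might be represented inside a system. The class and field names are illustrative assumptions for this article, not the ontology's actual vocabulary, which the researchers publish as a semantic framework rather than application code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExplanationType:
    """One explanation type, the kind of user question it answers,
    and the inputs a system needs in order to build it."""
    name: str
    answers_question: str       # the question form this type addresses
    required_inputs: tuple      # information the generator must have on hand

# Illustrative entries for the three types mentioned above.
CONTRASTIVE = ExplanationType(
    name="contrastive",
    answers_question="Why this option rather than that one?",
    required_inputs=("chosen option", "rejected alternative", "deciding criteria"),
)
TRACE = ExplanationType(
    name="trace",
    answers_question="How was this conclusion reached?",
    required_inputs=("rules or guideline steps fired", "intermediate conclusions"),
)
COUNTERFACTUAL = ExplanationType(
    name="counterfactual",
    answers_question="What would change if an input were different?",
    required_inputs=("original inputs", "altered input", "re-computed outcome"),
)
```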
The methodology involved developing a step-by-step protocol that system designers can follow to build explanation capabilities into AI systems. First, designers gather user questions through studies to understand what explanations people actually need. Then, they align these questions with appropriate explanation types from the ontology framework. Finally, they plan how the system will generate these explanations by identifying what information needs to be included and under what conditions each explanation type should be triggered.
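The sketch below shows one way the alignment step could look in code, reusing the explanation types defined above and a hypothetical keyword mapping. The real protocol relies on designer judgment and the ontology's competency questions rather than keyword matching, so this is only a rough stand-in.

```python
# Hypothetical alignment of user questions (step 1) with explanation
# types (step 2) and, implicitly, the information the generator will
# need (step 3). Keyword matching is a stand-in for the manual,
# designer-driven mapping the protocol actually describes.
QUESTION_PATTERNS = {
    "why not": "contrastive",      # "Why drug A and not drug B?"
    "what if": "counterfactual",   # "What if the patient had risk factor X?"
    "how did": "trace",            # "How did you arrive at this treatment?"
}

def align_question(user_question: str) -> str | None:
    """Return the explanation type a question maps to, if any."""
    q = user_question.lower()
    for pattern, explanation_type in QUESTION_PATTERNS.items():
        if pattern in q:
            return explanation_type
    return None  # no match: the designer may need to extend the mapping

print(align_question("What if the patient had an ASCVD risk factor?"))
# -> "counterfactual"
```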
The results show the approach working in a realistic setting: in a clinical decision support system for type-2 diabetes treatment, the framework helped generate explanations that answered clinicians' practical questions. For example, when a clinician asked "What if the patient had an ASCVD risk factor?" the system could generate a counterfactual explanation showing how this additional risk factor would change the recommended treatment. The researchers provided specific examples of pre-populated explanations that the system could generate, including contrastive explanations to help decide between drugs and trace explanations to expose the clinical guidelines behind suggested treatments.
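One way to picture how such a counterfactual explanation could be pre-populated is to re-run a rule-based recommendation with and without the risk factor and report the difference, as in the sketch below. The guideline rule and drug names here are placeholders, not the actual clinical logic from the paper.

```python
def recommend_treatment(patient: dict) -> str:
    """Toy stand-in for a guideline-driven recommendation rule.
    The rule and drug names are illustrative placeholders only."""
    if patient.get("ascvd_risk"):
        return "Drug B (preferred when ASCVD risk is present)"
    return "Drug A (default second-line therapy)"

def counterfactual_explanation(patient: dict, factor: str) -> str:
    """Compare the recommendation with and without one risk factor."""
    original = recommend_treatment(patient)
    altered = recommend_treatment({**patient, factor: True})
    return (
        f"Current recommendation: {original}. "
        f"If the patient also had {factor.replace('_', ' ')}, "
        f"the recommendation would change to: {altered}."
    )

patient = {"hba1c": 8.2, "ascvd_risk": False}
print(counterfactual_explanation(patient, "ascvd_risk"))
```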
This matters because it makes AI systems more usable and trustworthy in high-stakes environments. In healthcare, clinicians need to understand why an AI system recommends certain treatments before they can confidently use its suggestions. The framework provides a standardized way for AI developers to build explanation capabilities that address real user needs rather than just technical transparency. The researchers have made their resources openly available online, including competency questions that help designers identify what explanations their systems should provide.
The approach does have limitations. The current implementation focuses on rule-based systems and guideline-driven decisions, so its effectiveness with more complex machine learning models remains to be fully tested. Additionally, while the framework supports multiple explanation types, generating all of them effectively across different domains requires further development. The researchers note they are investigating how to integrate machine learning approaches with their semantic encoding framework to expand its capabilities.
Original Source
Read the complete research paper
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn