Explanation Supported Learning: Improving Prediction Performance with Explainable Artificial Intelligence

Tags: Data Fusion, Explainable Artificial Intelligence, and Knowledge Transfer
Abstract:
When artificial intelligence (AI) and machine learning (ML) models are applied in healthcare, the ability to understand and explain model decisions is essential. Methods in the field of explainable AI (XAI) have been developed to generate explanations for such decisions, lending transparency to, and trust in, the prediction model. However, using model explanations to improve prediction performance remains unexplored. Our proposed Explanation Supported Learning (XSL) framework can improve classification performance for ML models used in medical imaging systems, while also offering new insight into how deep learning (DL) models process medical images. The XSL framework consists of novel methods for transferring knowledge from one or more teacher models to a student model. The novelty lies in using explanations from the teacher models, obtained via XAI techniques, as additional features when training the student model. This approach enables flexible knowledge transfer between models with different architectures. We further demonstrate how the XSL framework can serve as a new metric for the quality of the explanations produced by XAI methods: achieving increased performance within the framework requires that the chosen XAI technique captures useful information about what the teacher models have learned from the input data. Testing XSL on the HyperKvasir gastrointestinal image dataset, we achieved significant improvements in most of the measured classification metrics and exceeded most benchmark scores reported in the HyperKvasir paper. A link to our code repository will be provided upon acceptance.
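The abstract does not fix a particular XAI technique or fusion mechanism, so the following is only a minimal sketch of the core idea in PyTorch: a teacher model's explanation is attached to each image as an extra input channel for the student. Vanilla gradient saliency stands in for the (unspecified) XAI method, channel concatenation is one plausible reading of "explanations ... as added features", and the class count of 23 matches the labeled subset of HyperKvasir; the names saliency_map and XSLStudent are hypothetical.

import torch
import torch.nn as nn
from torchvision.models import resnet18

def saliency_map(teacher, x, target_class=None):
    """Vanilla gradient saliency: |d(class score)/d(input)|, reduced over color channels."""
    x = x.clone().requires_grad_(True)
    logits = teacher(x)
    if target_class is None:
        target_class = logits.argmax(dim=1)        # explain the teacher's own prediction
    score = logits.gather(1, target_class.unsqueeze(1)).sum()
    score.backward()
    sal = x.grad.detach().abs().max(dim=1, keepdim=True).values   # (B, 1, H, W)
    lo = sal.amin(dim=(2, 3), keepdim=True)
    hi = sal.amax(dim=(2, 3), keepdim=True)
    return (sal - lo) / (hi - lo + 1e-8)           # normalize each map to [0, 1]

class XSLStudent(nn.Module):
    """Hypothetical student: a ResNet-18 whose first conv accepts RGB + 1 explanation channel."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = resnet18(num_classes=num_classes)
        self.backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

    def forward(self, x_with_explanation):
        return self.backbone(x_with_explanation)

# Usage: attach the teacher's explanation as a fourth input channel for the student.
teacher = resnet18(num_classes=23).eval()          # stand-in for a trained teacher
student = XSLStudent(num_classes=23)
images = torch.rand(2, 3, 224, 224)                # stand-in batch of endoscopic images
explanations = saliency_map(teacher, images)
logits = student(torch.cat([images, explanations], dim=1))

Under this reading, the framework's proposed use as an explanation-quality metric follows directly: swapping saliency_map for another XAI method and comparing the resulting student's validation scores would rank the methods by how much useful teacher knowledge their explanations carry.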