TracIn in Semantic Segmentation of Brain Tumors in MRI: An Extended Approach

Tags: Brain Tumor, Deep Learning, Healthcare, and xAI

Abstract:
In recent years, thanks to improved computational power and the increased availability of data, AI has become a fundamental tool in basic research and industry. Despite this rapid development, AI and deep neural networks remain black boxes that are difficult to explain. While a multitude of explainability (xAI) methods have been developed, their effectiveness and usefulness in realistic use cases remain understudied. This is a major limitation for the application of these algorithms in sensitive fields such as clinical diagnosis, where the robustness, transparency, and reliability of the algorithm are indispensable for its use. In addition, the majority of works have focused on feature attribution techniques (e.g., saliency maps), neglecting other interesting families of xAI methods such as data influence models. The aim of this work is to implement, extend, and test, for the first time, data influence functions on a challenging clinical problem, namely, the segmentation of brain tumors in MRI. We present a new methodology to calculate an influence score that generalizes to all semantic segmentation tasks in which the labels are mutually exclusive.
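For intuition, below is a minimal sketch of how a TracIn-style influence score can be computed for semantic segmentation with mutually exclusive labels (pixel-wise cross-entropy), following the original TracIn formulation of a learning-rate-weighted sum of gradient dot products across checkpoints. This is not the authors' implementation: the model, checkpoint paths, and learning rates are hypothetical placeholders.

```python
# Sketch of a TracIn-style influence score for semantic segmentation,
# assuming a PyTorch model whose output logits have shape (N, C, H, W)
# and integer masks of shape (N, H, W) with mutually exclusive labels.
import torch
import torch.nn.functional as F

def loss_grad(model, image, mask):
    """Flattened gradient of the pixel-wise cross-entropy loss
    with respect to all trainable parameters, for one (image, mask) pair."""
    logits = model(image.unsqueeze(0))                 # (1, C, H, W)
    loss = F.cross_entropy(logits, mask.unsqueeze(0))  # mean over pixels
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_score(model, checkpoints, etas, train_pair, test_pair):
    """TracIn influence of a training example on a test example:
    sum over checkpoints i of eta_i * <grad_train_i, grad_test_i>."""
    score = 0.0
    for ckpt_path, eta in zip(checkpoints, etas):      # hypothetical paths/rates
        model.load_state_dict(torch.load(ckpt_path))
        model.eval()
        g_train = loss_grad(model, *train_pair)
        g_test = loss_grad(model, *test_pair)
        score += eta * torch.dot(g_train, g_test).item()
    return score
```

A positive score marks the training example as a proponent of the test prediction (it reduced its loss during training), while a negative score marks an opponent; this is the reading of influence that the abstract's methodology extends to segmentation.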