
Explainable Deep Learning Models in Medical Imaging

EasyChair Preprint no. 13834

15 pages · Date: July 6, 2024

Abstract

Medical imaging has significantly benefited from advancements in deep learning, leading to improved diagnostic accuracy and efficiency. However, the opacity of deep learning models has hindered their broader acceptance in the clinical setting. Explainable deep learning models address this issue by providing insights into model decision-making processes, ensuring transparency, reliability, and trustworthiness in medical diagnostics.

Objectives:

This research aims to explore the development and application of explainable deep learning models in medical imaging. The primary objectives are:

  1. To review the current state-of-the-art methods for explainability in deep learning applied to medical imaging.
  2. To identify the challenges and limitations associated with existing explainability techniques.
  3. To propose novel methodologies or improvements to enhance the explainability of deep learning models in medical imaging.
  4. To evaluate the proposed methodologies through comprehensive experiments on various medical imaging datasets.

Methods:

The research will adopt a multi-phase approach encompassing literature review, methodology development, and empirical validation. Initially, a systematic review of the existing literature will be conducted to categorize and analyze current explainability techniques such as saliency maps, attention mechanisms, and concept attribution methods. Building on this foundation, novel approaches or enhancements to existing methods will be developed to address identified gaps. These methodologies will be integrated into popular deep learning architectures used in medical imaging, such as convolutional neural networks (CNNs) and transformers. Experiments will be conducted on diverse medical imaging datasets, including, but not limited to, MRI, CT, and X-ray images.
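To make the saliency-map idea concrete, the following is a minimal sketch, not the paper's method: a gradient-based saliency map assigns each input pixel the magnitude of the derivative of the model's score with respect to that pixel. The toy linear scoring model and the 8×8 "image" below are illustrative assumptions standing in for a trained CNN; for a linear model the gradient is simply the weight matrix, which we verify with finite differences.

```python
import numpy as np

# Hypothetical toy setup (NOT a trained CNN): an 8x8 "image" and a linear
# scoring model. Gradient-based saliency maps compute |d score / d input|.
rng = np.random.default_rng(0)
x = rng.random((8, 8))            # toy input image
w = rng.standard_normal((8, 8))   # toy model weights

def score(img):
    # Scalar class score; for a real CNN this would be the logit of interest.
    return float((w * img).sum())

# For this linear model the gradient of score w.r.t. each pixel equals w,
# so the saliency map is |w|. Verify with central finite differences.
saliency = np.abs(w)

eps = 1e-6
fd = np.empty_like(x)
for i in range(8):
    for j in range(8):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += eps
        xm[i, j] -= eps
        fd[i, j] = (score(xp) - score(xm)) / (2 * eps)

assert np.allclose(np.abs(fd), saliency, atol=1e-4)

# Normalise to [0, 1] so the map can be overlaid on the input as a heat map.
heat = (saliency - saliency.min()) / (saliency.max() - saliency.min())
print(heat.shape)  # (8, 8)
```

In practice the same gradient is obtained via automatic differentiation (e.g. `torch.autograd`) on a trained network, and the resulting heat map is overlaid on the MRI, CT, or X-ray slice to show which regions drove the prediction.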

Keyphrases: attention mechanisms, clinical trustworthiness, convolutional neural networks, deep learning, diagnostic accuracy, explainable AI, interpretability, medical imaging, saliency maps

BibTeX entry
BibTeX does not provide an entry type for preprints; the following is a workaround that produces a correct reference:
@Booklet{EasyChair:13834,
  author = {Kayode Sheriffdeen},
  title = {Explainable Deep Learning Models in Medical Imaging},
  howpublished = {EasyChair Preprint no. 13834},

  year = {EasyChair, 2024}}