Abstract:
Artificial Intelligence systems have the potential to revolutionize the field of medicine by increasing the efficiency of the healthcare sector and improving the quality of care. However, this transformation requires trust from medical professionals in these systems, a trust that can only be achieved through understanding. For this reason, a new research field has emerged: eXplainable Artificial Intelligence (XAI), which aims to explain the decision-making process of Artificial Intelligence algorithms. XAI is essential in high-stakes environments, such as medicine, where a wrong decision can seriously affect human lives. This study analyses, using various explainability methods, the decisions of a diagnostic support model for Distal Myopathies, a rare form of neuromuscular disease. It also proposes new explainability techniques: a novel approach to occlusion, called hierarchical occlusion, and the use of ensemble methods that combine individual explanations to generate more refined outputs. Finally, it evaluates the results of the explainability methods through the feedback of several expert observers and discusses their performance, limitations, and potential impact on trust and usability in clinical practice.
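
The abstract names hierarchical occlusion without detailing its mechanism. As a minimal sketch only, assuming a coarse-to-fine reading of the idea (occlude large patches first, then subdivide and re-test only the most influential ones), the following illustrates one plausible form; the function name, parameters (`patch`, `min_patch`, `keep_frac`), and the generic `predict` callable are all hypothetical, not the authors' implementation.

```python
import numpy as np

def hierarchical_occlusion(image, predict, target_class,
                           patch=32, min_patch=8, keep_frac=0.25,
                           baseline=0.0):
    """Coarse-to-fine occlusion saliency map (illustrative sketch).

    At each level, every active patch is masked with `baseline` and the
    drop in the model's score for `target_class` is recorded; only the
    most influential patches (top `keep_frac`) are subdivided and
    re-examined at the next, finer level.
    """
    h, w = image.shape[:2]
    base_score = predict(image)[target_class]
    saliency = np.zeros((h, w), dtype=float)
    # Start with every coarse patch active.
    regions = [(y, x, patch) for y in range(0, h, patch)
                             for x in range(0, w, patch)]
    while regions:
        drops = []
        for (y, x, s) in regions:
            occluded = image.copy()
            occluded[y:y + s, x:x + s] = baseline          # mask the patch
            drop = base_score - predict(occluded)[target_class]
            # Keep the strongest evidence seen for each pixel so far.
            saliency[y:y + s, x:x + s] = np.maximum(
                saliency[y:y + s, x:x + s], drop)
            drops.append(drop)
        if regions[0][2] // 2 < min_patch:
            break
        # Subdivide only the most influential patches into quadrants.
        k = max(1, int(len(regions) * keep_frac))
        top = sorted(range(len(regions)),
                     key=lambda i: drops[i], reverse=True)[:k]
        regions = [(y + dy, x + dx, s // 2)
                   for i in top
                   for (y, x, s) in [regions[i]]
                   for dy in (0, s // 2) for dx in (0, s // 2)]
    return saliency
```

Compared with standard occlusion at a single fine resolution, a scheme of this kind would spend most model evaluations on the regions that matter, which is the usual motivation for hierarchical refinement; the paper itself should be consulted for the actual formulation.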