Abstract
Explainable Artificial Intelligence (XAI) is an emerging machine learning field that has been applied successfully to medical image analysis. Interpretable approaches are able to “unbox” the black-box decisions made by AI systems, helping medical doctors better justify their diagnoses. In this paper, we analyze the performance of three different XAI strategies for medical image analysis in ophthalmology. We consider a multimodal deep learning model that combines optical coherence tomography (OCT) and infrared reflectance (IR) imaging for the diagnosis of age-related macular degeneration (AMD). The classification model achieves an accuracy of 0.94, outperforming unimodal alternatives. We analyze the XAI methods in terms of their ability to identify retinal damage and their ease of interpretation, concluding that Grad-CAM and Guided Grad-CAM can be combined to provide both a coarse visual justification and a fine-grained analysis of the retinal layers. We provide important insights and recommendations for practitioners on how to design automated and explainable screening tests based on the combination of two image sources.
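The combination the abstract recommends has a simple mechanical core: Guided Grad-CAM is the elementwise product of a guided-backpropagation saliency map (fine pixel-level detail) with the upsampled Grad-CAM heatmap (coarse class-discriminative localization). The sketch below illustrates that combination in PyTorch; it is not the authors' code, and the ResNet-18 backbone, target layer, and random input tensor are stand-in assumptions for the paper's multimodal OCT+IR network.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, image, class_idx):
    """Coarse heatmap: ReLU of the gradient-weighted sum of conv feature maps."""
    feats = []
    handle = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    score = model(image)[0, class_idx]
    handle.remove()
    fmap = feats[0]                                   # (1, C, h, w) conv activations
    grad = torch.autograd.grad(score, fmap)[0]        # d(score)/d(fmap)
    weights = grad.mean(dim=(2, 3), keepdim=True)     # global-average-pool the gradients
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

def guided_backprop(model, image, class_idx):
    """Fine-grained saliency: only positive gradients flow back through ReLUs."""
    hooks = []
    for m in model.modules():
        if isinstance(m, torch.nn.ReLU):
            m.inplace = False                         # in-place ops break backward hooks
            hooks.append(m.register_full_backward_hook(
                lambda mod, gin, gout: (torch.clamp(gin[0], min=0.0),)))
    image = image.clone().requires_grad_(True)
    model(image)[0, class_idx].backward()
    for h in hooks:
        h.remove()
    return image.grad

# Stand-in backbone and input; the paper's model fuses OCT and IR branches instead.
model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)
cam = grad_cam(model, model.layer4[-1], x, class_idx=0)
saliency = guided_backprop(model, x, class_idx=0)
guided_grad_cam = saliency * cam                      # fine detail, coarsely localized
```

Multiplying the two maps keeps the fine structural detail of guided backpropagation only where the coarse heatmap attends, which is what makes the pairing useful for inspecting individual retinal layers.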
| Original language | English |
|---|---|
| Article number | e0311811 |
| Journal | PLoS ONE |
| Volume | 19 |
| Issue number | 11 (November) |
| DOIs | |
| State | Published - Nov 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2024 Vairetti et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.