Why we do need explainable AI for healthcare

Open Access
Publication date 02-12-2025
Journal Diagnostic and Prognostic Research
Article number 24
Volume 9
Number of pages 8
Organisations
  • Faculty of Economics and Business (FEB)
  • Faculty of Economics and Business (FEB) - Amsterdam Business School Research Institute (ABS-RI)
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
The recent uptake of certified Artificial Intelligence (AI) tools for healthcare applications has renewed the debate around their adoption. Explainable AI, the sub-discipline promising to render AI devices more transparent and trustworthy, has also come under scrutiny as part of this discussion. Some experts in the medical AI space question the reliability of Explainable AI techniques, expressing concerns about their use and their inclusion in guidelines and standards. Revisiting these criticisms, this article offers a balanced perspective on the utility of Explainable AI, focusing on the specificity of clinical applications of AI and placing them in the context of healthcare interventions. Against its detractors, and despite valid concerns, we argue that the Explainable AI research program remains central to human-machine interaction and is ultimately a useful safeguard against loss of control, a danger that cannot be prevented by rigorous clinical validation alone.
Document type Article
Language English
Related publication Why we do need Explainable AI for Healthcare
Published at https://doi.org/10.1186/s41512-025-00209-4