The ethics and epistemology of explanatory AI in medicine and healthcare

Posted on December 4, 2020

Information for the Special Issue

Submission Deadline: Sat 01 May 2021
Journal Impact Factor: 2.068
Journal Name : Ethics and Information Technology
Journal Publisher: Springer
Website for the Special Issue: https://www.springer.com/journal/10676/updates/18649082
Journal & Submission Website: https://www.springer.com/journal/10676

Special Issue Call for Papers:

Guest Editors

Juan Manuel Durán (TU Delft), Martin Sand (TU Delft), Karin R. Jongsma (UMC Utrecht)

Ethics and Information Technology is calling for the submission of papers for a Special Issue on the ethics and epistemology of explainable AI in medicine and healthcare. Modern medicine is increasingly supported and driven by diverse AI systems. While medical AI is assumed to be able to “make medicine human again” (Topol, 2019) by diagnosing diseases more accurately and thus freeing doctors to spend more time with their patients, a major issue that emerges with this technology is that of explainability, either of the system itself or of its outcomes.

In recent debates, it has been claimed that “[for] the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result” (Holzinger et al. 2020). Holzinger and colleagues suggest that being unable to provide explanations for certain automated decisions could have adverse effects on patients’ trust in those decisions (p. 194). But does trust really require explanation and, if so, what kind of explanation? Alex John London has forcefully contested the requirement of explainability, suggesting that it sets a standard that cannot be upheld in health care: many interventions (e.g., treatments such as aspirin) are commonly accepted and applied because they are deemed effective, even though we lack an understanding of their underlying causal mechanisms. Accuracy, London argues, is therefore a more important value for medical AI than explainability (London 2019). At this juncture, the central questions remain disputed: Is explainability philosophically and computationally possible? Are there suitable alternatives to explainability (e.g., accuracy)? Does explainability play, or should it play, a role – and if so, which one – in the responsible implementation of AI in medicine and healthcare?

The present Special Issue aims to get to the heart of this problem, connecting computer science and medical ethics with philosophy of science, philosophy of medicine, and philosophy of technology. All contributions must relate technical and epistemological issues to the normative and social problems raised by the use of AI in medicine and healthcare.

We are particularly interested in contributions that shed new light on the following questions:

  • What are the distinctive characteristics of explanations in AI for medicine and healthcare?
  • Which epistemic and normative values (e.g., explainability, accuracy, transparency) should guide the design and use of AI in medicine and healthcare?
  • Does AI in medicine pose particular requirements for explanations? 
  • Is explanatory pluralism a viable option for medical AI (i.e., pluralism of discipline and pluralism of agents receiving/offering explanations)?
  • Which virtues (e.g., social, moral, cultural, cognitive) underlie explainable medical AI?
  • What is the epistemic and normative connection between explanation and understanding?
  • How are trust (e.g., normative and epistemic) and explainability related? 
  • What kinds of explanations are required to increase trust in medical decisions?
  • What is the role of transparency in explanations in medical AI?
  • How are accountability and explainability related in medical AI?

References

Holzinger A, Carrington A, Müller H. Measuring the Quality of Explanations: The System Causability Scale (SCS). KI – Künstliche Intelligenz. 2020;34(2):193-8. doi: 10.1007/s13218-020-00636-z.

London AJ. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Center Report. 2019;49(1):15-21. doi: 10.1002/hast.973.

Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books; 2019.

Important dates

1 May 2021: Submission deadline

End of 2021: Expected time of publication of Special Issue

Papers must be submitted via the online submission system and shall not exceed 8,000 words, including references. Submissions will be double-blind refereed for relevance to the theme as well as for academic rigor and originality. High-quality articles not deemed sufficiently relevant to the Special Issue may be considered for publication in a subsequent non-themed issue. Pre-submission inquiries are encouraged and should be directed to the main guest editor, Juan Manuel Durán (j.m.duran@tudelft.nl).
