Advances in Explainable (XAI) and Responsible (RAI) Artificial Intelligence

Posted on May 28, 2020

Information for the Special Issue

Submission Deadline: Tue 15 Dec 2020
Journal Impact Factor: 5.667
Journal Name: Information Fusion
Journal Publisher: Elsevier
Website for the Special Issue: https://www.journals.elsevier.com/information-fusion/call-for-papers/multi-sensor-multi-source-information-fusion
Journal & Submission Website: https://www.journals.elsevier.com/information-fusion

Special Issue Call for Papers:

In the last few years, the interest in deriving complex AI models capable of achieving unprecedented levels of performance has progressively given way to a growing concern with alternative design factors aimed at making such models more usable in practice. Indeed, in many applications complex AI models are of limited or even no practical utility. The reason is that AI models are often designed with performance as their only target, leaving aside other important aspects such as privacy awareness, transparency, confidence, fairness or accountability. Remarkably, all these aspects have gained great momentum in the Artificial Intelligence community, giving rise to dedicated sections in prospective studies and reports delivered at the highest international levels (see e.g. “Ethics Guidelines for Trustworthy Artificial Intelligence”, by the High-Level Expert Group on AI, April 2019).

In this context, Explainable AI (XAI) refers to those Artificial Intelligence techniques aimed at explaining, to a given audience, the details of or reasons why a model produces its output [1]. To this end, XAI borrows concepts from philosophy, the cognitive sciences and social psychology to yield a spectrum of methodological approaches that can provide explainable decisions for users without a strong background in Artificial Intelligence. XAI therefore aims to bridge the gap between the complexity of the model to be explained and the cognitive skills of the audience for which explainability is sought. Interdisciplinary XAI methods have so far embraced elements from multiple disciplines, including signal processing, adversarial learning, visual analytics and cognitive modeling, to name a few. Although reported XAI advances have risen sharply in recent years, there is broad consensus on the need for further studies on the explainability of ML models. A major focus has been placed on XAI developments that involve the human in the loop and thereby become human-centric. These include the automated generation of counterfactuals, neuro-symbolic reasoning, and fuzzy rule-based systems, among others.
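As a concrete illustration of one such human-centric technique, the sketch below performs a simple brute-force counterfactual search for a binary classifier: it greedily perturbs one feature at a time until the prediction flips. It is a minimal sketch under assumed inputs (a synthetic dataset and a scikit-learn logistic regression), not a reference implementation of any particular XAI method; dedicated counterfactual generators would be used in practice.

    # Minimal, illustrative counterfactual search (assumed synthetic data and model).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    clf = LogisticRegression().fit(X, y)

    def counterfactual(x, model, step=0.05, max_iter=2000):
        """Greedily perturb x, one small feature step at a time, until the
        predicted class flips; returns the perturbed (counterfactual) point."""
        target = 1 - model.predict(x.reshape(1, -1))[0]
        x_cf = x.copy()
        for _ in range(max_iter):
            if model.predict(x_cf.reshape(1, -1))[0] == target:
                break
            # Candidate moves: +/- one step along each feature; keep the move
            # that most increases the probability of the target class.
            candidates = [x_cf + step * d * np.eye(len(x))[j]
                          for j in range(len(x)) for d in (-1.0, 1.0)]
            probs = [model.predict_proba(c.reshape(1, -1))[0, target] for c in candidates]
            x_cf = candidates[int(np.argmax(probs))]
        return x_cf

    x0 = X[0]
    x_cf = counterfactual(x0, clf)
    print("original prediction:      ", clf.predict(x0.reshape(1, -1))[0])
    print("counterfactual prediction:", clf.predict(x_cf.reshape(1, -1))[0])
    print("feature changes:          ", np.round(x_cf - x0, 3))

The resulting feature deltas can be read as a contrastive, human-oriented explanation of the form “had these inputs been slightly different, the decision would have changed”.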

A step beyond XAI is Responsible AI (RAI), which denotes a set of principles to be met when deploying AI-based systems in practical scenarios: fairness, explainability, human-centricity, privacy awareness, accountability, safety and security. RAI therefore extends XAI by ensuring that other critical modeling aspects are taken into account when deploying AI-based systems in practice, covering not only algorithmic proposals but also new procedures devoted to ensuring responsibility in the application and usage of AI models, including tools for accountability and data governance, methods to assess and explain the impact of decisions made by AI models, and techniques to detect, counteract or mitigate the effect of bias on the model’s output. Only by carefully accounting for all these aspects will humans fully trust and welcome processes and systems endowed with AI-based functionalities (e.g. Robotics, Machine Learning, Optimization and Reasoning).
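To make one of these responsibility aspects concrete, the short sketch below computes the demographic parity difference, a standard group-fairness diagnostic defined as the gap in positive-prediction rates between two groups. The synthetic predictions and sensitive attribute are assumptions for illustration only; in a real audit the metric would be computed on a deployed model's outputs.

    # Illustrative group-fairness check: demographic parity difference.
    # Assumes binary predictions and a binary sensitive attribute (synthetic here).
    import numpy as np

    rng = np.random.default_rng(42)
    sensitive = rng.integers(0, 2, size=1000)        # hypothetical group membership
    scores = rng.random(1000) + 0.10 * sensitive     # model scores, slightly biased
    y_pred = (scores > 0.5).astype(int)              # thresholded predictions

    def demographic_parity_difference(y_pred, sensitive):
        """P(y_hat = 1 | group 1) - P(y_hat = 1 | group 0); values near 0 are fairer."""
        return y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean()

    print("demographic parity difference:",
          round(demographic_parity_difference(y_pred, sensitive), 3))

A value far from zero flags a disparity that bias detection and mitigation techniques, among the topics listed below, would aim to expose and reduce.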

This special issue seeks original works and fresh studies dealing with research findings on XAI and RAI. The list of topics of interest includes, but is not limited to:

XAI:

  • Post-hoc explainability techniques for AI models
  • Neural-symbolic reasoning to explain AI models
  • Fuzzy Rule-based Systems for explaining AI models
  • Counterfactual explanations of AI models
  • Explainability and data fusion
  • Knowledge representation for XAI
  • Human-centric XAI
  • Visual explanations for AI models
  • Contrastive explanation methods for XAI
  • Natural Language generation for XAI
  • Interpretability of other ML tasks (e.g. ranking, recommendation, reinforcement learning)
  • Hybrid transparent-black-box modeling
  • Quantitative evaluation (metrics) of the interpretability of AI models
  • Quantitative evaluation (metrics) of the quality of explanations
  • XAI and theory-guided data science

RAI:

  • Privacy-aware methods for AI models
  • Accountability of decisions made by AI models
  • Bias and fairness in AI models
  • Methodology for an ethical and responsible use of AI based models
  • AI models’ output confidence estimation
  • Adversarial analysis for AI security (attack detection, explanation and defense)
  • Causal reasoning, causal explanations, and causal inference

[1] A. Barredo Arrieta, N. Diaz-Rodriguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, F. Herrera, “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI”, Information Fusion, vol. 58, pp. 82-115, June 2020.

Please prepare your paper along with all supplementary materials for your submission. Papers submitted to this special issue must be original: they must not have been published, be under review, or be submitted to any other journal, conference, or workshop. Papers will be peer-reviewed by at least three independent reviewers and will be selected based on their originality, scientific quality and suitability to this special issue. The journal editors will make the final decision on which papers are accepted.

Authors must ensure that they carefully read the guide for authors before submitting their papers. The guide for authors and the link for online submission are available on the Information Fusion homepage at: https://www.journals.elsevier.com/information-fusion. Please select “SI:XAIRAI” at the “Article Type” step when submitting your paper. For any questions regarding this special issue, authors may contact Javier Del Ser directly by email at javier.delser@tecnalia.com.

Guest Editor(s)

Prof. Javier Del Ser, TECNALIA, Spain / University of the Basque Country (UPV/EHU), Spain
Email: javier.delser@tecnalia.com

Prof. Natalia Diaz-Rodriguez, ENSTA Paris, Institut Polytechnique de Paris, France / INRIA Flowers Team, Palaiseau, France
Email: natalia.diaz@ensta-paristech.fr

Prof. Andreas Holzinger, Medical University of Graz, Austria / University of Alberta, Canada
Email: a.holzinger@human-centered.ai

Prof. Francisco Herrera, University of Granada, Spain
Email: herrera@decsai.ugr.es

Deadline for Submission: December 15th, 2020
