Guest Editors (in alphabetical order)
- Martin Atzmueller, Osnabrueck University, Germany (email@example.com)
- Johannes Fuernkranz, Johannes Kepler University Linz, Austria (firstname.lastname@example.org)
- Tomas Kliegr, Prague University of Economics and Business, Czech Republic (email@example.com)
- Ute Schmid, University of Bamberg, Germany (firstname.lastname@example.org)
Background, Motivation, Topics

Recently, scientific discourse in artificial intelligence and data science has focused on explainable AI (XAI), addressing the algorithmic transparency, interpretability, accountability and, ultimately, explainability of algorithmic models and decisions. In machine learning, data mining and knowledge discovery, approaches can be classified as white-box or black-box. White-box approaches, such as rule learners and inductive logic programming, produce explicit models that are inherently interpretable. Black-box approaches, such as (deep) neural networks, produce opaque models; for this second type of model, a variety of approaches for post-hoc explanation generation have been proposed in recent years. Interpretability and explainability ultimately foster understandability, relating to one of the classical definitions of knowledge discovery in databases (Fayyad et al., 1996).
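To make the white-box/black-box distinction concrete, here is a minimal sketch (not part of the call; the feature names and thresholds are hypothetical, iris-style values) of a rule-list classifier. Its prediction logic is itself the explanation: every decision can be traced to an explicit if/then rule, whereas an opaque model would require a separate post-hoc explanation step.

```python
# Minimal illustrative sketch of a white-box model: an ordered rule list.
# Each rule pairs a human-readable description with its condition, so the
# model can report which rule fired for every prediction it makes.

RULES = [
    # (description, condition, predicted class) -- hypothetical rules
    ("petal_length <= 2.0", lambda x: x["petal_length"] <= 2.0, "setosa"),
    ("petal_width <= 1.7",  lambda x: x["petal_width"] <= 1.7,  "versicolor"),
]
DEFAULT_CLASS = "virginica"


def predict_with_explanation(x):
    """Return (label, fired_rule): the prediction explains itself."""
    for description, condition, label in RULES:
        if condition(x):
            return label, description
    return DEFAULT_CLASS, "default rule (no condition matched)"


label, why = predict_with_explanation({"petal_length": 1.4, "petal_width": 0.2})
print(f"{label} because {why}")  # prints "setosa because petal_length <= 2.0"
```

A post-hoc XAI method for a black-box model aims to recover a comparable justification after the fact, e.g., by fitting an interpretable surrogate such as this rule list to the black-box's predictions.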
Explainable and Interpretable Machine Learning (XI-ML) aims at bringing together research on interpretable and explainable machine learning, also in relation to data mining and knowledge discovery. Integrating these areas should open up new perspectives on appropriate learning formalisms, interpretation and explanation techniques, their metrics, and the respective assessment options.

This special issue will provide a leading forum for timely, in-depth presentation of recent advances in explainable and interpretable machine learning and data mining. We aim to tackle these themes from the modeling and learning perspective, targeting interpretable methods and models that are able to explain themselves and their output, respectively. The call therefore covers a wide range of potential topics. We solicit high-quality, original papers describing work on the following (non-exhaustive) list of topics:
- Rule learning for explainable and interpretable machine learning
- Interactive learning, explainability in reinforcement learning
- Causality of machine learning models
- Interpretation of neural networks and ensemble-based methods
- Explanation of black-box models
- Simplifying random forests and other ensemble models
- Local pattern mining for explanation
- Causal knowledge discovery
- Assessment of interpretable and explainable models
- Methodologies for measuring explainability of machine learning models
- User experiments evaluating effectiveness of explanation algorithms
- Interpretability-accuracy trade-off and its benchmarks
- Exploiting interactive explanations for learning
- Cognitive approaches and human concept learning
- Human and algorithmic biases in XAI
- Human-centered learning and explanation
- Explainability of data visualization and exploration methods such as clustering
- Model-agnostic explanation
- Case-based explanation
- Evaluation metrics
- Empirical research on explainability
- Regulations and legal aspects of XAI
- Use of knowledge graphs in XAI research
- Explainability of relational learning
- Applications of all of the above (in text classification, disinformation research, information retrieval including recommender systems, legal domain, biology, …)
Submission Guidelines

Authors are encouraged to submit high-quality, original work that has neither appeared in, nor is under consideration by, other journals. Springer offers authors, editors and reviewers of Data Mining and Knowledge Discovery a web-enabled online manuscript submission and review system, which allows authors to track the review process of their manuscript. Information for Authors can be found at: https://www.springer.com/journal/10618/submission-guidelines
Submit manuscripts to: http://DAMI.edmgr.com.
Indicate the special issue “S.I. Explainable and Interpretable Machine Learning and Data Mining” in the “Additional Information” step.
Important Dates

- Submission Due: March 31, 2021
- 1st Review Notification: June 30, 2021
- Revision Due: August 31, 2021
- Final Notification: September 30, 2021