Safe and Fair Machine Learning

Posted on January 25, 2021

Information for the Special Issue

Submission Deadline: Tue 15 Feb 2022
Journal Impact Factor: 2.672
Journal Name : Machine Learning
Journal Publisher: Springer
Website for the Special Issue: https://www.springer.com/journal/10994/updates/18786592
Journal & Submission Website: https://www.springer.com/journal/10994

Special Issue Call for Papers:

In recent years, safety and fairness have emerged as increasingly relevant topics in machine learning (ML), mainly because ML has become an important and inseparable part of our daily lives. ML is everywhere: traffic prediction, recommendation systems, marketing analysis, medical diagnosis, autonomous driving, robot control, and decision-making support for businesses and even governments all make use of ML. ML systems have produced a disruptive change in society, enabling the automation of many tasks by leveraging the huge amount of information available in the Big Data era. For some applications, ML systems have shown impressive capabilities, even outperforming humans.

Despite these achievements, the presence of ML in many real-world applications has brought new challenges related to the trustworthiness of these systems. The potential of these algorithms to cause undesirable behaviors is a growing concern in the ML community, especially when they are integrated into real-world safety-critical systems. Deploying ML in the real world may have dangerous consequences: it has been shown that ML could delay medical diagnoses, cause environmental damage or harm to humans, produce racist, sexist, and other discriminatory behaviors, and even provoke traffic accidents. Moreover, learning algorithms are vulnerable and can be compromised by smart attackers, who can gain a significant advantage by exploiting the weaknesses of ML systems. In light of these concerns, one key question arises: can we avoid undesirable behaviors and design ML algorithms that behave safely and fairly?

This special issue aims to bring together papers outlining the safety and fairness implications of the use of ML in real-world systems, papers proposing methods to detect, prevent, and/or alleviate undesired behaviors that ML-based systems might exhibit, papers analyzing the vulnerability of ML systems to adversarial attacks and the possible defense mechanisms, and, more generally, any paper that stimulates progress on topics related to safe and fair ML.

Topics of Interest

Contributions are sought in (but are not limited to) the following topics:

  • Fairness and/or safety in machine learning
  • Safe reinforcement learning
  • Safe robot control
  • Bias in machine learning
  • Adversarial examples in machine learning and defense mechanisms
  • Applications of transparency to safety and fairness in machine learning
  • Verification techniques to ensure safety and robustness
  • Safety and interpretability by having a human in the loop
  • Backdoors in machine learning
  • Transparency in machine learning
  • Robust and risk-sensitive decision making

Contributions must contain new, unpublished, original, and fundamental work related to the Machine Learning journal’s mission. All submissions will be reviewed using rigorous scientific criteria, whereby the novelty of the contribution will be crucial.

Submission Instructions

Submit manuscripts to: http://MACH.edmgr.com. Select “SI: Safe and Fair Machine Learning” as the article type. Papers must be prepared in accordance with the Journal guidelines: https://www.springer.com/journal/10994/submission-guidelines?IFA

Key Dates

Continuous submission/review process

  • Submission deadline: 15 November 2021
  • First decision: 15 January 2022
  • Revision and resubmission deadline: 15 February 2022
  • Paper acceptance: 15 April 2022
  • Camera-ready: 1 May 2022

Guest Editors

  • Dana Drachsler Cohen (Technion, Israel Institute of Technology)
  • Javier García (Universidad Carlos III de Madrid)
  • Mohammad Ghavamzadeh (Google Research)
  • Marek Petrik (University of New Hampshire)
  • Philip S. Thomas (University of Massachusetts)
