Robust Machine Learning

Posted on June 3, 2020

Information for the Special Issue

Submission Deadline: Mon 17 Aug 2020
Journal Impact Factor : 2.672
Journal Name : Machine Learning

Special Issue Call for Papers:


Machine learning approaches are currently deployed in a wide variety of systems, most of them low-risk decision systems. There is a growing need to incorporate machine-learning-based solutions into high-stakes applications, which requires algorithms and tools that meet stringent robustness requirements in high-cost industrial processes and safety-critical systems (such as power plant or aircraft operations).

The Machine Learning journal invites submissions to a special issue on “Robust Machine Learning.”

Regulations and industry standards often require specific levels of safety and reliability to be met. Many prescribe the robust detection of failing components, to limit collateral damage to healthy subsystems, together with assurance that systems function within certain working parameters. However, only a small number of machine learning approaches can accommodate these additional requirements. Assuring robustness for machine learning components is challenging because of the way they generalize from training data to previously unseen situations. It is both infeasible and undesirable to specify the desired output for every possible situation, yet we still need to provide certain assurances about how the learned behavior generalizes. Recent advances have shown that it can be feasible to prove the robustness of learned models, including deep neural networks.

This special issue is devoted to exploring emerging research questions in robust machine learning and robustness metrics. Learned models must generalize and operate in previously unseen regions of the input space; therefore, robustness research should focus on approaches that estimate how machine learning methods handle previously unmodeled phenomena – the unknown unknowns – and whether the risk associated with decisions in those regions can be bounded. Learning self-competence functions (derived, for example, from confidence estimates, such as a reject option) is one such approach.
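The reject option mentioned above can be illustrated with a minimal sketch: a model abstains from predicting whenever its confidence falls below a threshold, deferring the decision to a fallback system or a human. The function and threshold names below are illustrative assumptions, not part of any specific library.

```python
# Minimal sketch of a confidence-based reject option, assuming a model
# that outputs a list of class probabilities. All names here are
# hypothetical, chosen for illustration only.

REJECT = None  # sentinel meaning "defer to a human or fallback system"

def predict_with_reject(probabilities, threshold=0.9):
    """Return the most likely class index, or REJECT if the model's
    confidence (the maximum probability) falls below the threshold."""
    best_class = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best_class] < threshold:
        return REJECT  # self-assessed low competence: abstain
    return best_class

# Usage: a confident prediction vs. an abstention
print(predict_with_reject([0.02, 0.95, 0.03]))  # → 1
print(predict_with_reject([0.40, 0.35, 0.25]))  # → None (rejected)
```

The threshold trades off coverage against error rate: raising it reduces the risk of incorrect decisions in poorly modeled regions of the input space at the cost of abstaining more often.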

Topics of interest include but are not limited to:

  • ML robustness metrics
  • Fundamental ideas for ML robustness with unmodeled phenomena
  • Proving the adversarial robustness of deep neural networks and other learned models
  • Verification of neural networks
  • Robust optimization for assurance on learned model errors
  • Robustness related to speed and performance issues that arise when learning is embedded in a complex system that has real-time or near-real-time performance requirements
  • Robust interactive human-in-the-loop decision systems
  • Beyond robust prediction, into robust actions and control
  • Making systems and their decisions robust to machine learning
  • Addressing ML-specific large system robustness issues. The robust functioning of systems with ML components is often dependent on unspecified properties of the ML components.
  • Designing ML systems for feedback loops
  • ML debugging
  • Engineering for robust ML
  • Robustness approaches that rely on a portfolio of models


Important Dates:

November 17, 2020: Decisions
February 1, 2021: Camera-ready papers


Guest Editors:

Daniel Fremont – University of California, Santa Cruz

Mykel Kochenderfer – Stanford University

Alessio Lomuscio – Imperial College London

Dragos Margineantu – Boeing Research & Technology

Cheng Soon Ong – CSIRO, Data 61
