WAISE 2019 : Second International Workshop on Artificial Intelligence Safety Engineering


Conference Information

Submission Deadline Monday 13 May 2019
Conference & Submission Link https://www.waise.org/
Conference Dates Sep 10, 2019
Conference Address Turku, Finland

Conference Call for Papers

==================================================

Call for Contributions

Second International Workshop on Artificial Intelligence Safety Engineering (WAISE2019), associated to SAFECOMP

Turku, Finland, September 10, 2019

Submission Deadline: May 13, 2019

https://www.waise.org

==================================================

SCOPE

———

Research, engineering and regulatory frameworks are needed to achieve the full potential of *Artificial Intelligence (AI)*, because they will guarantee a standard level of safety and settle issues such as compliance with ethical standards and liability for accidents involving, for example, autonomous cars. Designing AI-based systems for operation in proximity to and/or in collaboration with humans implies that current safety engineering and legal mechanisms need to be revisited to ensure that individuals and their property are not harmed and that the desired benefits outweigh the potential unintended consequences.

Approaches to AI safety range from the purely theoretical (moral philosophy or ethics) to the purely practical (engineering). Making progress towards safe AI-based systems requires combining philosophy and theoretical science with applied science and engineering, in an interdisciplinary approach that covers the technical (engineering) aspects of how to actually create, test, deploy, operate and evolve safe AI-based systems, as well as broader strategic, ethical and policy issues.

Increasing levels of AI in “smart” sensory-motor loops allow intelligent systems to perform in increasingly dynamic, uncertain and complex environments with increasing degrees of autonomy, with humans progressively removed from the control loop. Adaptation to the environment is achieved by Machine Learning (ML) methods rather than by more traditional engineering approaches, such as system modelling and programming. Certain ML methods, such as deep learning, reinforcement learning and their combination, have recently proven very promising. However, the inscrutability or opaqueness of the statistical models for perception and decision-making that these methods build poses yet another challenge. Moreover, the combination of autonomy and inscrutability in these AI-based systems is particularly challenging in safety-critical applications, such as autonomous vehicles, personal care or assistive robots, and collaborative industrial robots.

The WAISE workshop is intended to explore new ideas on safety engineering for AI-based systems, ethically aligned design, and regulation and standards for such systems. In particular, WAISE will provide a forum for thematic presentations and in-depth discussions about safe AI architectures, ML safety, safe human-machine interaction, bounded morality and safety considerations in automated decision-making systems, in a way that makes AI-based systems more trustworthy, accountable and ethically aligned.

WAISE aims to bring together experts, researchers and practitioners from diverse communities, such as AI, safety engineering, ethics, standardization and certification, robotics, cyber-physical systems, safety-critical systems, and application domain communities such as automotive, healthcare, manufacturing, agriculture, aerospace, critical infrastructures, and retail.

TOPICS

———

Contributions are sought in (but are not limited to) the following topics:

* Regulating AI-based systems: safety standards and certification

* Safety in AI-based system architectures: safety by design

* Runtime AI safety monitoring and adaptation

* Safe machine learning and meta-learning

* Safety constraints and rules in decision making systems

* AI-based system predictability

* Continuous Verification and Validation of safety properties

* Avoiding negative side effects

* Algorithmic bias and AI discrimination

* Model-based engineering approaches to AI safety

* Ethically aligned design of AI-based systems

* Machine-readable representations of ethical principles and rules

* Uncertainty in AI

* Accountability, responsibility and liability of AI-based systems

* AI safety risk assessment and reduction

* Confidence, self-esteem and the distributional shift problem

* Reward hacking and training corruption

* Self-explanation, self-criticism and the transparency problem

* Safety in the exploration vs exploitation dilemma

* Simulation for safe exploration and training

* Human-machine interaction safety

* AI applied to safety engineering

* AI safety education and awareness

* Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others

IMPORTANT DATES

———

* Paper submission: May 13, 2019

* Notification of acceptance: June 3, 2019

* Camera-ready submission: June 13, 2019

FORMAT

———

To deliver a truly memorable event, we will follow a highly interactive format that includes invited talks and thematic sessions. The thematic sessions will be structured into short pitches and a common panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles are part of this format: session chairs, presenters and paper discussants. The workshop will be organized as a single-day meeting.

Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

SUBMISSION AND SELECTION

———

You are invited to submit full scientific contributions (max. 12 pages), short position papers (max. 6 pages) or proposals for technical talks/sessions (short abstracts, max. 2 pages).

Manuscripts must be submitted as PDF files via the EasyChair online submission system:

https://easychair.org/conferences/?conf=waise2019

Please format your paper according to the Springer LNCS formatting guidelines (single-column format). The Springer author kit can be downloaded from: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines

The workshop proceedings will be published as a complementary volume to the SAFECOMP proceedings in Springer LNCS.

Papers will be peer-reviewed by the Program Committee (minimum 3 reviewers per paper).

The workshop follows a single-blind reviewing process. However, we will also accept anonymized submissions.

For any questions, please send an email to: waise2019@easychair.org

COMMITTEES

———

Organization Committee

* Zakaria Chihani, CEA LIST, France

* Simos Gerasimou, University of York, UK

* Guillaume Charpiat, INRIA, France

* Andreas Theodorou, Umeå University, Sweden

Steering Committee

* Huascar Espinoza, CEA LIST, France

* Rob Alexander, University of York, UK

* Stuart Russell, UC Berkeley, USA

* Raja Chatila, ISIR – Sorbonne University, France

* Nozha Boujemaa, DATAIA Institute & INRIA, France

* Virginia Dignum, Umeå University, Sweden

* Philip Koopman, Carnegie Mellon University, USA

Programme Committee (see the website: http://www.waise.org)
