New Techniques in Adversarial Machine Learning

Information for the Special Issue

Submission Deadline: Tue 20 Oct 2020
Journal Impact Factor: 3.541
Journal Name: Applied Soft Computing
Journal Publisher: Elsevier
Website for the Special Issue: https://www.journals.elsevier.com/applied-soft-computing/call-for-papers/new-techniques-in-adversarial-machine-learning
Journal & Submission Website: https://www.journals.elsevier.com/applied-soft-computing/

Special Issue Call for Papers:

Guest Editors

Jin Li, E-mail: lijin@gzhu.edu.cn (Managing Guest Editor)
Guangzhou University, Guangzhou, China
Peng Cheng Laboratory, Shenzhen, China

Witold Pedrycz, E-mail: wpedrycz@ualberta.ca
University of Alberta, Edmonton, AB, Canada

Changyu Dong, E-mail: changyu.dong@newcastle.ac.uk
Newcastle University, UK

With the rapid development of data science, machine learning has been widely applied in many important fields, such as computer vision, healthcare systems, and financial prediction, to support the design of Artificial Intelligence systems.

However, AI and machine learning systems operate in adversarial environments, in which learning tasks face a variety of serious security threats from multiple parties. In fact, a malicious adversary can carefully manipulate input data, learning procedures, and outputs by exploiting specific vulnerabilities of learning tasks to compromise the security of machine learning systems. With respect to the adversary’s goal, these threats can be characterized along three main axes: security violation, attack specificity, and error specificity. Examples include: 1) evading detection without compromising normal system operation; 2) causing misclassification of a specific set of samples, or of any sample; 3) having a sample misclassified as a specific class, or as any class different from the true class.
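
To make the last two distinctions concrete, the following minimal sketch shows untargeted and targeted variants of the one-step fast gradient sign method (FGSM), a standard evasion attack. It is written in PyTorch purely for illustration; the model, labels, and perturbation budget eps are hypothetical placeholders, not something prescribed by this call.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03, target=None):
    """One-step fast gradient sign method (FGSM).

    With target=None the attack is untargeted: it increases the loss on
    the true label y, so x is misclassified as any other class.
    With a target class it is targeted: it decreases the loss on the
    target, so x is misclassified as that specific class.
    """
    # Work on a detached leaf copy so gradients flow to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    label = y if target is None else target
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    sign = 1.0 if target is None else -1.0  # ascend vs. descend the loss
    x_adv = x_adv + sign * eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep inputs in a valid range
```

For an untargeted attack one would call fgsm(model, x, y); for a targeted one, pass the desired class indices as target.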

To understand the security properties of Adversarial Machine Learning (AML), one should address three main issues: 1) identifying potential vulnerabilities of machine learning algorithms during learning and classification; 2) devising attacks that correspond to the identified threats and evaluating their impact on the targeted system; 3) proposing countermeasures to improve the security of machine learning algorithms against the considered attacks.

This special issue will benefit the research community by identifying challenges and disseminating the latest methodologies and solutions in adversarial machine learning. The ultimate objective is to publish high-quality articles that present open issues and deliver algorithms, protocols, frameworks, and solutions. All submissions will be peer reviewed by at least three experts in the field and evaluated with respect to relevance to the special issue, level of innovation, depth of contribution, and quality of presentation. Case studies that address state-of-the-art research and state-of-practice industry experience are also welcome. The Guest Editors will make an initial determination of the suitability and scope of all submissions. Papers that lack originality or clarity of presentation, or that fall outside the scope of the special issue, will not be sent for review, and the authors will be promptly notified in such cases. Submitted papers must not be under consideration by any other journal or publication.

Topics

Topics of interest include, but are not limited to, the following:

  • Dependable Machine Learning Algorithms in Adversarial Settings
  • Secure Federated Machine Learning against Malicious Attackers
  • Security Evaluation for Federated Machine Learning
  • Privacy Disclosure in Traditional Machine Learning Algorithms
  • Adversarial Samples and Their Detection Methods for Natural Language Processing
  • Adversarial Samples and Their Detection Methods for Image Recognition
  • Malware Code Manipulation against Machine-Learning-Based Detection
  • Verification Mechanisms for Data-Outsourced Machine Learning
  • Explanation Methods for Machine Learning Applications

Important Dates

Submission Opens: Feb. 20th, 2020

Notification: within 3-4 months of submission

Submission Deadline: Oct. 20th, 2020

We will publicize the special issue CFP through multiple channels, including the Applied Soft Computing website and mailing lists. We expect more than 100 submissions and plan to accept around 15-25 papers, targeting an acceptance rate of about 15%.

Highlight of the Topic

It has been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert the predictions of machine-learning-based artificial intelligence applications, which poses clear security threats to those applications. Examples include manipulating malware code so that the corresponding sample is misclassified as legitimate, or manipulating images to mislead object recognition. The vulnerability of machine learning in such adversarial environments, along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. This special issue will therefore benefit readers by identifying challenges and disseminating the latest methodologies and solutions in adversarial machine learning.
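
As one concrete instance of such a countermeasure, the sketch below outlines adversarial training in a simplified single-step form; the model, optimizer, and eps here are hypothetical placeholders assumed for illustration, and stronger schemes generate the perturbations with multi-step PGD rather than one FGSM step.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One step of simplified (single-step) adversarial training.

    Adversarial examples are crafted against the current model, and the
    model is then updated on a mix of clean and perturbed inputs so it
    learns to classify both correctly.
    """
    # Craft FGSM perturbations against the current parameters.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Standard training update on the clean + adversarial batch.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Weighting the clean and adversarial losses equally is one common choice; the trade-off between clean accuracy and robustness is itself an open question within the scope of this issue.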

Submission Instructions

Paper submissions for this special issue should follow the submission format and guidelines for regular papers on the Applied Soft Computing website: http://www.journals.elsevier.com/applied-soft-computing/. Manuscripts should be submitted online via the EES system: http://ees.elsevier.com/asoc/. Please select this special issue in the EES system, with the article type “SI: Techniques in Adversarial ML”.
