Robust Recognition Systems against Adversarial Attacks

  Posted on May 8, 2020

Information for the Special Issue

Submission Deadline: Wed 15 Jul 2020
Journal Impact Factor: 5.910
Journal Name: Information Sciences
Journal Publisher: Elsevier
Website for the Special Issue: https://www.journals.elsevier.com/information-sciences/call-for-papers/robust-recognition-systems-against-adversarial-attacks
Journal & Submission Website: https://www.journals.elsevier.com/information-sciences

Special Issue Call for Papers:

1. Scope:

Recognition systems are inevitably affected by noisy or polluted data caused by accidental outliers, transmission loss, or even adversarial attacks. Unlike random noise with a low corruption ratio, adversarial attacks can be arbitrary and unbounded, and they need not follow any specific distribution. Most existing recognition systems are highly vulnerable to adversarial examples, i.e., input samples modified very slightly so as to fool classifiers or other models. In many cases these modifications are so subtle that a human observer does not notice them at all, yet the system still makes a mistake, even when the adversary has no access to the underlying system. A single incorrect inference can be costly for recognition systems tied to privacy or security, such as biometric recognition and autonomous vehicles. There is therefore a need to analyze the adversarial phenomenon in computer vision and, in turn, to enhance the robustness of recognition systems. Although research in this area is booming, many challenges remain: the causes of adversarial vulnerability require further investigation, more principled criteria for evaluating the robustness of deep neural networks are needed, and the transferability of adversarial examples has not yet been well explained or exploited in existing research.
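
For concreteness, the sketch below crafts such an adversarial example with the fast gradient sign method (FGSM) of Goodfellow et al., one of the simplest attacks in this literature. It is a minimal PyTorch sketch, assuming a differentiable classifier and inputs in [0, 1]; the function name and epsilon budget are illustrative, not part of this call.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Fast gradient sign method: take one step of size epsilon in the
    direction that increases the classification loss on (x, y)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The per-pixel perturbation is imperceptibly small, yet it often
    # flips the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```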

This special issue serves as a forum for researchers worldwide to discuss their efforts and recent advances in robust recognition systems and their applications. Both state-of-the-art work and literature reviews are welcome. In particular, to give readers a state-of-the-art background on the topic, we will invite one survey paper, which will also undergo peer review. The special issue seeks to present and highlight the latest developments in practical methods for building robust recognition systems via adversarial defenses. Papers addressing interesting real-world applications are especially encouraged.

Topics of interest include, but are not limited to:

  • Theoretical analysis on vulnerability of recognition systems
  • Existence and transferability of adversarial examples
  • Adversarial robustness of recognition systems
  • Adversarial training against adversarial examples (see the sketch after this list)
  • Evaluation approaches for robustness to adversarial examples
  • Defenses against transfer-based attacks on computer vision and recognition systems
  • Robust system verification against adversarial attacks
  • Robust optimization methods for training models
  • Data denoising methods for deep neural networks
  • Interpretability of adversarial attacks and defenses
  • Multimodal recognition systems as a defense
  • Real-world applications of robust recognition systems and adversarial defenses, e.g., robust adversarial-example detectors, adversarial patches in computer vision applications, and physical defenses
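
As a minimal illustration of the adversarial training topic above, the sketch below pairs a projected gradient descent (PGD) attack with a single training step, in the style of Madry et al.; the model, optimizer, and hyperparameters (epsilon, step size, iteration count) are illustrative assumptions rather than requirements of this call.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent inside an L-infinity ball of radius epsilon."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the epsilon-ball around x and the valid image range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One minibatch of adversarial training: fit the model on worst-case
    perturbations of the inputs rather than on the clean inputs themselves."""
    model.eval()                     # fixed statistics while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```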

2. Submission Guidelines

Authors should prepare their manuscripts according to the Guide for Authors on the Information Sciences page (https://www.journals.elsevier.com/information-sciences). All papers will be peer-reviewed following the standard Information Sciences reviewing procedures.

When submitting papers, please select the Article Type “Adversarial Robustness”. The EES submission website is located at http://ees.elsevier.com/ins/.

3. Important Dates

  • Paper Submission: July 15, 2020
  • Notification of Acceptance: November 1, 2020
  • Final Manuscript Due: December 15, 2020

4. Guest Editors
