Intelligent surveillance systems increasingly surround us, capturing data about how we live and what we do. One of the biggest complaints about surveillance has been its inability to correctly identify objects or activities in situations that appear trivial to a human observer. The security industry remains interested in extracting actionable data from intelligent video and sensory surveillance systems, and this year interest from non-security industries has grown as well. In particular, with the spread of large-scale sensor and visual surveillance, videos captured by static and dynamic cameras, and information sensed by diverse sensors, must be analyzed automatically.
For example, convolutional neural networks have demonstrated superiority in modeling high-level visual concepts, while recurrent neural networks have shown promise in modeling temporal dynamics. Human behavior analysis with deep learning is an emerging research area in intelligent surveillance. The goal of this special issue is to call for a coordinated effort to understand the opportunities and challenges arising in intelligent surveillance with deep learning techniques.
This special issue aims to provide a platform for the research community and professionals to demonstrate solutions and address research challenges in visual and sensory data processing for real-time intelligent surveillance systems. Further, the diffusion of visual and sensory data in intelligent surveillance systems will open new horizons and research domains that may be of interest to readers and the research community. The issue seeks high-quality manuscripts covering new research on the implementation of visual and sensory data processing for real-time intelligent surveillance systems, and will offer a timely collection of research updates to benefit researchers and practitioners working in the broad area of image and video processing using deep learning.
The topics of interest include, but are not limited to:
- Deep feature learning for surveillance video
- Deep learning to detect faces and objects of interest in surveillance settings
- Deep learning-based face and object recognition
- Object tracking and motion analysis in surveillance settings based on deep learning techniques
- Scene analysis and understanding within the deep learning paradigm
- Video summarization and synopsis based on learned prior knowledge using deep learning
- Surveillance information retrieval using deep learning-based features and architectures
- Action, activity, and abnormal activity detection and recognition using deep methods
- Deep learning-based human interaction and crowd/group dynamics
- Deep learning approaches for activity recognition with a focus on sensor data
- Sensor fusion approaches for activity and behavior analysis
Dr. Alok Kumar Singh Kushwaha (Lead Guest Editor)
Guru Ghasidas University (A Central University), Bilaspur, India
Email: [email protected]

Dr. Om Prakash
Hemvati Nandan Bahuguna Garhwal University (A Central University), Srinagar (Garhwal), India
Email: [email protected]

Dr. Manish Khare
Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT), Gandhinagar, Gujarat, India
Email: [email protected]

Dr. Jeonghwan Gwak
Korea National University of Transportation (KNUT), Chungju, Republic of Korea
Email: [email protected]

Dr. Nguyen Thanh Binh
Ho Chi Minh City University of Technology, VNU-HCM, Viet Nam
Email: [email protected]

Dr. Ashish Khare
University of Allahabad, Prayagraj, Uttar Pradesh, India
Email: [email protected]
Manuscript submission deadline: extended to 30 April, 2021
First review notification: 30 June, 2021
Revised manuscript submission: 15 September, 2021
Final decision: 30 November, 2021
Authors should prepare their manuscript according to the Instructions for Authors available from the Multimedia Tools and Applications website. Authors should submit through the online submission site at https://www.editorialmanager.com/mtap/default.aspx and select “SI 1220 – Visual and Sensory Data Processing for Real Time Intelligent Surveillance System” when they reach the “Article Type” step in the submission process. Submitted papers should present original, unpublished work, relevant to one of the topics of the special issue. All submitted papers will be evaluated on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation, by at least three independent reviewers. It is the policy of the journal that no submission, or substantially overlapping submission, be published or be under review at another journal or conference at any time during the review process.
The special issue will consider papers extending previously published conference papers, provided the journal submission presents a significant contribution beyond the conference paper. Authors must explain in the introduction to the paper the new contribution to the field made by the submission, and the original conference publication should be cited in the text. Note that neither verbatim transfer of large parts of the conference paper nor wholesale reproduction of already published figures is acceptable.