1. Summary and Scope
Recent years have witnessed the proliferation of mobile computing and the Internet of Things (IoT), in which billions of mobile and IoT devices are connected to the Internet, generating massive volumes of data at the network edge. Driven by this trend and the development of 5G, edge computing, an emerging computing paradigm, has received tremendous interest. By pushing data storage, computation, and control closer to the network edge, edge computing is widely recognized as a promising solution for meeting the requirements of low latency, high scalability, and energy efficiency. Meanwhile, with the development of neural networks, Artificial Intelligence (AI) has been applied to a variety of disciplines and has proved highly successful in a vast class of intelligent applications across many domains.
Recently, edge intelligence, which aims to facilitate the deployment of neural networks on edge computing platforms, has received significant attention. However, many challenges remain in designing edge computing architectures for AI applications and in co-optimizing the two. For instance, conventional neural network techniques usually entail powerful computing facilities (e.g., cloud computing platforms), whereas entities at the edge may have only limited resources for computation and communication. This suggests that AI algorithms should be revisited and adapted so that models can be deployed on edge devices for efficient processing. On the other hand, adapted deployments of neural networks at the edge enable efficient learning systems that provide “smartification” across different layers, e.g., from network communications to applications, and also involve collaboration from edge to cloud. Finally, designing algorithms for small-scale edge devices in a learning environment is all the more challenging because several conflicting issues must be accounted for, including memory management, power management, and the compute capability of a node.
In this special issue, we solicit original work on ML/AI, specifically catered to deep neural networks on/for edge computing and efficient learning systems on edge computing, addressing specific challenges in this field. The list of possible topics includes, but is not limited to:
Neuromorphic computing challenges on Edge devices
Intelligent Edge Computing Devices for neurocomputing applications
Spiking Neural Networks on Edge devices – low-power and memory bandwidth challenges
Conventional Neural Networks algorithms on edge computing
Neurocomputing Algorithms on edge devices for Wearables
Edge/Fog-infused Cloud architectures for ML/AI applications
Efficient Artificial intelligence algorithms on edge computing
Few-shot learning on edge devices for ML/AI applications
Resource and data management for edge intelligence
AI/ML for small-scale low-power edge devices
Distributed and cooperative learning with edge devices on Cloud
Applications of edge intelligence & neurocomputing
5G-enabled services for edge intelligence & neurocomputing
System architectures of edge based neurocomputing
Architecture & application of Edge AI for IoT
Security & privacy for edge computing
Attack mitigation in edge computing
2. Submission Guidelines
Authors should prepare their manuscripts according to the “Instructions for Authors” guidelines of Neurocomputing outlined at the journal website https://www.elsevier.com/journals/neurocomputing. In their cover letters, authors need to explicitly identify the most closely matching topic from the list above. All papers will be peer-reviewed following the journal’s regular reviewing procedure. Each submission should clearly demonstrate evidence of benefits to society or large communities. Originality and impact on society, in combination with a media-related focus and innovative technical aspects of the proposed solutions, will be the major evaluation criteria.
3. Important Dates
Submission Deadline: 30th September 2020
First Review Decision: 31st December 2020
Revisions Due: 31st January 2021
Decision on the Accepted manuscripts: 28th February 2021
Expected publication date: 30th April 2021
4. Guest Editors
Dr. Zeng Zeng
Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR)
Google Scholar: https://scholar.google.com.sg/citations?hl=zh-CN&user=ztBsejkAAAAJ&view_op=list_works&sortby=pubdate
Zeng Zeng is an IEEE Senior Member. He received the Ph.D. degree in electrical and computer engineering from the National University of Singapore, Singapore. He currently works as a Senior Scientist and Program Head at the Institute for Infocomm Research (I2R), A*STAR, Singapore. From 2011 to 2014, he was a Senior Research Fellow at the National University of Singapore, and from 2005 to 2011 he was a professor in the School of Computer and Communication, Hunan University, China. His research interests include distributed/parallel computing systems, data stream analysis, deep learning, multimedia storage systems, and wireless sensor networks.
Dr. Cen Chen
Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR)
Google Scholar: https://scholar.google.com.sg/citations?user=BIQ_I9wAAAAJ&hl=zh-EN
Cen Chen received his PhD degree in Computer Science from Hunan University, China. He currently works as a Scientist II at the Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore. His research interests include parallel and distributed computing, machine learning, and deep learning. He has published several research articles on machine learning algorithms and parallel computing in international conferences and journals, such as IEEE TC, IEEE TPDS, AAAI, ICDM, and ICPP, among others.
A/P Bharadwaj Veeravalli
Department of ECE,
Faculty of Engineering, NUS, Singapore
Google Scholar: https://scholar.google.com.sg/citations?user=IqAJttsAAAAJ
Bharadwaj Veeravalli, Senior Member, IEEE and IEEE-CS, received the BSc degree in Physics from Madurai-Kamaraj University, India, in 1987, the Master’s degree in Electrical Communication Engineering from the Indian Institute of Science (IISc), Bangalore, India, in 1991, and the PhD degree from the Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India, in 1994. He received gold medals for overall performance in his bachelor’s degree and for an outstanding PhD thesis (IISc, Bangalore, India) in 1987 and 1994, respectively. He is currently a tenured Associate Professor in the Communications and Information Engineering (CIE) division of the Department of Electrical and Computer Engineering, National University of Singapore, Singapore. His mainstream research interests include cloud/grid/cluster computing (big data processing, analytics, and resource allocation), scheduling in parallel and distributed systems, cybersecurity, and multimedia computing. He is one of the earliest researchers in the field of Divisible Load Theory (DLT). He currently serves on the editorial board of IEEE Transactions on Parallel and Distributed Systems as an Associate Editor. He can be contacted via: firstname.lastname@example.org
Prof. Keqin Li
SUNY Distinguished Professor, USA
Google Scholar: https://scholar.google.com/citations?user=x0YtT7QAAAAJ&hl=en
Professor Keqin Li is an IEEE Fellow. His current research interests include big data computing, computer architectures and systems, cloud computing, fog computing and mobile edge computing, energy-efficient computing and communication, embedded systems and cyber-physical systems, heterogeneous computing systems, high-performance computing, CPU-GPU hybrid and cooperative computing, computer networking, machine learning, and intelligent and soft computing. He has published over 680 journal articles, book chapters, and refereed conference papers, and has received several best paper awards.
Dr. Joey Tianyi Zhou
Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR)
Google Scholar: https://scholar.google.com/citations?user=cYNqDokAAAAJ&hl=en
Joey Tianyi Zhou is currently a Scientist, Principal Investigator, and Group Manager with the Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore, where he leads the AI Group of more than 30 research staff members. Before joining IHPC, he was a senior research engineer with the SONY US Research Center in San Jose, USA. He received a Ph.D. degree in computer science from Nanyang Technological University (NTU), Singapore. His current interests focus mainly on machine learning and its applications in natural language processing and computer vision. In these areas, he has co-authored more than 50 articles and received the Best Poster Award Honorable Mention at the Asian Conference on Machine Learning (ACML’12), the Best Poster Award at the HANDS workshop at the International Conference on Computer Vision (ICCV’19), Best Paper Awards at the BeyondLabeler workshop at the International Joint Conference on Artificial Intelligence (IJCAI’16) and the BOOM workshop at IJCAI’19, and a Best Student Paper Nomination at the European Conference on Computer Vision (ECCV’16).
Dr. Zhou co-organized the Learning on Big Data workshop at ACML 2016, the Multi-output Learning workshop at IJCAI 2019, and the Efficient AI workshop at ICDCS 2020. He has served as an Associate/Guest Editor for IEEE Access, IET Image Processing, IEEE Multimedia, and ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), and as TPC Chair of Mobimedia 2020. He received a NIPS Best Reviewer Award in 2017.