Call for Papers: Special Issue on Active Crowd Sensing

Guest Editors:
Zhiyong Yu, Fuzhou University (China), email@example.com
Jiangtao Wang, Lancaster University (UK), firstname.lastname@example.org
Jordán Pascual Espada, University of Oviedo (Spain), email@example.com
Mobile Crowd Sensing (MCS) has emerged as a paradigm for gathering information about the physical world. Using smartphones, networked vehicles and other sensor-rich mobile, portable and wearable devices, massive volumes of data can be gathered, converged and mined to enable applications in intelligent traffic, environmental monitoring, urban planning or public safety management, amongst many others.
The cost of crowdsensed data and the quality of that data are two key factors for MCS. A great deal of work has optimized them separately, for example by recruiting participants who can meet a task completion requirement, or by using data already collected to infer uncollected data with minimal error. Optimizing them jointly, however, is a promising direction. One approach builds on active learning, which selects the most informative samples and thereby reduces the number of labeled samples needed; this technique has been applied successfully in other research fields, including signal and image processing.
In MCS, this approach is known as 'Active Crowd Sensing', and it presents several challenges. First, crowdsensed data are spatiotemporally autocorrelated, whereas traditional active learning assumes the data are independent and identically distributed. Second, data from different applications require different learning algorithms, so active selection strategies must be tailored to the specific learning algorithm in use.
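As an illustration of the active learning idea mentioned above, the sketch below shows uncertainty-based query selection, one common strategy in which the samples whose predictions are least confident are chosen for labeling (or, in the MCS setting, for sensing). This is a minimal, hypothetical example using only NumPy; the function name and the entropy criterion are illustrative assumptions, not a method prescribed by this call.

```python
import numpy as np

def select_queries(probs, budget):
    """Illustrative uncertainty sampling: pick the `budget` samples
    whose predicted class probabilities have the highest entropy,
    i.e. where the current model is least certain."""
    # Shannon entropy of each sample's predictive distribution
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Indices of the most uncertain samples, most uncertain first
    return np.argsort(entropy)[-budget:][::-1]

# Example: predicted probabilities for 5 unlabeled samples
probs = np.array([[0.95, 0.05],
                  [0.50, 0.50],   # most uncertain
                  [0.80, 0.20],
                  [0.55, 0.45],
                  [0.99, 0.01]])
print(select_queries(probs, 2))  # prints [1 3]
```

In Active Crowd Sensing, a selection criterion of this kind would additionally have to account for the spatiotemporal autocorrelation and per-application learning algorithms discussed above, which is precisely what makes the problem non-trivial.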
This special issue of Personal and Ubiquitous Computing provides an opportunity for researchers and product developers to review and discuss the state of the art in Active Crowd Sensing, to explore novel application areas and demonstrate its benefits, and to identify open issues in sensing and learning in MCS.
Topics may include (but are not limited to):
- Modeling the crowdsensed data cost and learned data quality in MCS
- Participant/location/time selection which can benefit learned data quality
- Missing data inference in Active Crowd Sensing
- Spatiotemporal autocorrelation analysis of crowdsensed data
- Relations between sensing rate and learning rate
- Partitioning between sensing tasks and learning tasks
- Learning directly from incomplete crowdsensed data
- Adaptive measurement matrix for compressive sensing in MCS
- Selective sensing and active learning by edge computing
- Sub-sampling and super-resolution for crowdsensed data
- Sparsity measurement and recoverability of crowdsensed data
- Sparse mobile crowdsensing
- Learning-assisted optimization in MCS
- Active (deep/transfer) learning in MCS
- Reinforcement/compressive/distilled learning for MCS
- New datasets/features that are easily accessible and helpful for label inference
- Problem modeling and framework development for Active Crowd Sensing
- Novel applications and systems in Active Crowd Sensing
Manuscript Submission: June 15, 2020
Decision Notification: September 15, 2020
Final Manuscript Due Date: October 15, 2020
Publication Date (tentative): 2021
Papers must address one or more of the above issues, be original, and not be under consideration in other publication venues.
All papers will be peer reviewed by at least two independent reviewers in addition to the editors.
Extended versions of already-published conference papers may be considered, provided they contain at least 30% new or additional content relative to the original.
Authors must follow the formatting and submission instructions of the Personal and Ubiquitous Computing journal at https://www.springer.com/journal/779.
In the first step of the submission system (Editorial Manager), please select “Original article” as the article type. In subsequent steps, please confirm that your submission belongs to a special issue and choose the appropriate special issue title from the drop-down menu.