For self-driving cars and any autonomous, intelligent robot system, reliable pose information and reliable perception of the environment are of the utmost importance. Without them, a self-driving car could not start its journey and a mobile robot system could not commence any intelligent mission. To compute robust localisation data, the robot needs to fuse several different sensor sources into its pose estimate, coping with sensor noise to arrive at a consistent result. To detect obstacles in all weather conditions, a self-driving car uses various sensor systems, such as cameras, LiDAR and radar, but none of them is perfect on its own. Therefore, only fusing the sensor data will ensure a robust perception of the situation and the environment.
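The idea of combining noisy sensor readings into a consistent estimate can be illustrated with a minimal, hypothetical sketch: a one-dimensional, precision-weighted (Kalman-style) fusion of two position sensors. The sensor names and noise values below are illustrative assumptions, not part of this announcement.

```python
# Minimal 1D illustration: fusing two noisy position sensors with a
# precision-weighted (Kalman-style) update. All numbers are made up.

def fuse(mean_a, var_a, mean_b, var_b):
    """Combine two Gaussian estimates of the same quantity.

    The fused mean weights each reading by the inverse of its variance,
    so the less noisy sensor dominates; the fused variance is always
    smaller than either input variance.
    """
    k = var_a / (var_a + var_b)           # gain toward measurement b
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a
    return mean, var

# Example: a low-noise LiDAR-like reading and a high-noise GPS-like
# reading of the vehicle's position along one axis (hypothetical values).
pos, var = fuse(10.2, 0.04, 10.9, 1.0)
print(pos, var)  # fused estimate stays close to the low-noise reading
```

The same precision-weighting principle underlies full state estimators such as the extended Kalman filter, applied recursively over vector-valued states.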
With this Special Issue, we aim to collect current work on sensor fusion in the field of self-driving cars and autonomous robots. We welcome successful application examples of sensor fusion in these fields, but we also invite works that improve the theory or technology of state estimation with a focus on these particular application domains. The Special Issue focuses on, but is not limited to, original work on:
Novel sensor fusion techniques;
Novel sensor technologies;
Success stories in sensor application;
Innovative processing of sensor data;
Neural networks and their training, e.g., for object detection;
Advances in vehicle navigation based on sensor improvements;
Mapping based on fused sensor data;
Connected vehicles for a common perception;
Realistic sensor simulation;
Semantic segmentation and data annotation.