Dept. of Robotics Science and Technology,
Chubu University


Potential Risk Localization via Weak Labeling out of Blind Spot

Authors
Kota Shimomura, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
Publication
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024

Download: PDF (English)

Achieving fully autonomous driving requires not only understanding the current surroundings but also predicting how objects that could lead to potential risks may change in the future. Predicting potential risk regions, especially regions where pedestrians or vehicles might suddenly appear, is crucial for safe autonomous driving and accident avoidance. However, constructing datasets annotated with potential risk regions is costly. Conventional methods have therefore proposed estimating blind spots through automatic labeling based on depth maps or segmentation masks, but these methods are limited in applicability because they rely on camera parameters or point clouds. In this study, we propose a method that automatically generates labels from depth maps and segmentation masks and estimates potential risk regions in 2D. Because our automatic labeling algorithm relies solely on images, it is applicable to any onboard-camera dataset. To demonstrate the effectiveness of our approach, we define regions where pedestrians or vehicles might emerge from blind spots as potential risk regions and annotate them, creating a new dataset extended with potential risk region annotations. Experiments on the Cityscapes Dataset show that weakly supervised training with labels generated by our method achieves accuracy equal or superior to supervised training with manually annotated ground truth (GT). Furthermore, experiments on the Mapillary Vistas Dataset and the BDD100K Dataset demonstrate the versatility of our approach.
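For illustration, the following is a minimal sketch of how image-only weak labels for potential risk regions might be derived from a depth map and a segmentation mask: pixels just behind strong depth discontinuities (i.e., behind occluders) that border the road are marked as candidate blind-spot regions. The function name, class ID, and thresholds are hypothetical and do not reproduce the paper's actual algorithm.

# Minimal sketch of image-only weak-label generation for potential risk
# regions, assuming a per-pixel depth map and a semantic segmentation mask
# as inputs. ROAD_ID, DEPTH_JUMP, and BAND_WIDTH are hypothetical values,
# not taken from the paper.
import numpy as np

ROAD_ID = 0          # hypothetical class ID for drivable road
DEPTH_JUMP = 5.0     # hypothetical depth-discontinuity threshold (metres)
BAND_WIDTH = 10      # width (pixels) of the risk band behind an occluder

def generate_risk_labels(depth: np.ndarray, seg: np.ndarray) -> np.ndarray:
    """Mark pixels just behind strong horizontal depth discontinuities
    that border the road as potential risk regions (weak labels)."""
    h, w = depth.shape
    risk = np.zeros((h, w), dtype=np.uint8)

    # Horizontal depth gradient: a large jump marks the silhouette of a
    # near occluder against a farther background, i.e. a candidate
    # blind-spot boundary.
    grad = depth[:, 1:] - depth[:, :-1]

    ys, xs = np.nonzero(np.abs(grad) > DEPTH_JUMP)
    for y, x in zip(ys, xs):
        # The risk band lies on the far (larger-depth) side of the edge.
        if grad[y, x] > 0:
            x0, x1 = x + 1, min(x + 1 + BAND_WIDTH, w)
        else:
            x0, x1 = max(x + 1 - BAND_WIDTH, 0), x + 1
        # Keep only bands that touch the road, where an occluded
        # pedestrian or vehicle could plausibly emerge.
        if (seg[y, x0:x1] == ROAD_ID).any():
            risk[y, x0:x1] = 1
    return risk

In this sketch, the resulting binary map would serve as the weak supervision signal for training a 2D risk-region estimator, in place of manually annotated ground truth.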
