1D Self-Attention Network for Point Cloud Semantic Segmentation using Omnidirectional LiDAR
- Takahiro Suzuki, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
- Asian Conference on Pattern Recognition, 2021
Understanding the environment around a vehicle is essential for automated driving technology. For this purpose, omnidirectional LiDAR is used to obtain surrounding information, and point-cloud-based semantic segmentation methods have been proposed. However, these methods require time to acquire and process the point cloud data, which causes a significant positional shift of objects in practical application scenarios. In this paper, we propose a 1D self-attention network (1D-SAN) for LiDAR-based point-cloud semantic segmentation, which is based on a 1D-CNN for real-time pedestrian detection from omnidirectional LiDAR data. Because the proposed method can process segmentation sequentially during data acquisition with omnidirectional LiDAR, it reduces the processing time and suppresses the positional shift. Moreover, to improve segmentation accuracy, we use the intensity as input data and introduce a self-attention mechanism into the method. The intensity enables the network to consider object texture, and the self-attention mechanism captures relationships among points in the cloud. Experimental results on the SemanticKITTI dataset show that both the intensity input and the self-attention mechanism improve accuracy; in particular, the self-attention mechanism improves accuracy for small objects. We also show that the proposed method is faster than other point-cloud segmentation methods.
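The core idea of applying self-attention over a 1D sequence of LiDAR points can be sketched as standard scaled dot-product attention, where each point (including its intensity channel) attends to every other point in the sequence. The sketch below is a generic illustration under assumed shapes, not the paper's exact 1D-SAN architecture; the projection weights are random stand-ins for what would be learned parameters.

```python
import numpy as np

def self_attention_1d(points, d_k=8, seed=0):
    """Scaled dot-product self-attention over a 1D sequence of points.

    points: (N, d_in) array; here each row is assumed to be
    (x, y, z, intensity). Returns (N, d_k) attended features and the
    (N, N) attention matrix. Weights are random for illustration only.
    """
    rng = np.random.default_rng(seed)
    n, d_in = points.shape
    W_q = rng.normal(size=(d_in, d_k))  # query projection (learned in practice)
    W_k = rng.normal(size=(d_in, d_k))  # key projection
    W_v = rng.normal(size=(d_in, d_k))  # value projection
    Q, K, V = points @ W_q, points @ W_k, points @ W_v
    scores = Q @ K.T / np.sqrt(d_k)              # pairwise point relations
    scores -= scores.max(axis=1, keepdims=True)  # softmax numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # each row sums to 1
    return attn @ V, attn

# Toy input: three points as (x, y, z, intensity); the first two are
# close together with similar intensity, the third is far away.
pts = np.array([[1.0, 0.2, 0.1, 0.9],
                [1.1, 0.2, 0.1, 0.8],
                [5.0, 3.0, 0.5, 0.1]])
feats, attn = self_attention_1d(pts)
```

Because attention is computed over the whole sequence, each output feature mixes information from all points, which is how the mechanism can relate a small object's points to their surroundings.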