Machine Perception and Robotics Group
Chubu University


Simultaneous Visual Context-aware Path Prediction

Authors
Haruka Iesaki, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi, Yasunori Ishii, Kazuki Kozuka, Ryota Fujimura
Publication
International Conference on Computer Vision Theory and Applications, 2020

Download: PDF (English)

Autonomous cars need to understand the environment around them to avoid accidents. Moving objects such as pedestrians and cyclists affect decisions about driving direction and behavior, and pedestrians do not always appear alone, so we must simultaneously know how many people are in the surrounding environment. Path prediction should therefore be grounded in an understanding of the current scene. To solve this problem, we propose a path prediction method that considers the motion context obtained from dashcams. Conventional methods take the surrounding environment and positions as input and output probability values; our approach instead predicts probabilistic paths from visual information. Our method is an encoder-predictor model based on convolutional long short-term memory (ConvLSTM), which extracts visual information from object coordinates and images. We examine two types of input image and two types of model. The input images capture person context and are constructed by cropping regions around people's positions and omitting the background. The two models differ in whether the decoder inputs are fed back recursively, since future images cannot be obtained at prediction time. Our results show that visual context contains useful information and yields better predictions than using coordinates alone. Moreover, we show that our method easily extends to predicting the paths of multiple people simultaneously.
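The paper itself does not include code, but the described architecture maps to a compact sketch. Below is a minimal, hypothetical PyTorch rendition of a ConvLSTM encoder-predictor that consumes dashcam frames plus person-position heatmaps and emits probabilistic location maps. All names, channel sizes, and the heatmap coordinate encoding are assumptions for illustration; it implements the non-recursive decoder variant, where the last observed input is repeated because future frames are unavailable.

```python
# Hypothetical sketch, not the authors' implementation: a ConvLSTM
# encoder-predictor for visual context-aware path prediction.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: LSTM gates computed with convolutions."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c

class EncoderPredictor(nn.Module):
    """Encode observed frames + coordinate heatmaps, then roll the
    predictor forward without future images (non-recursive variant)."""
    def __init__(self, in_ch=4, hid_ch=32):
        super().__init__()
        self.hid_ch = hid_ch
        self.encoder = ConvLSTMCell(in_ch, hid_ch)
        self.predictor = ConvLSTMCell(in_ch, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, 1)  # per-pixel location logits

    def forward(self, frames, coord_maps, horizon):
        # frames: (B, T, 3, H, W); coord_maps: (B, T, 1, H, W)
        B, T, _, H, W = frames.shape
        h = frames.new_zeros(B, self.hid_ch, H, W)
        c = frames.new_zeros(B, self.hid_ch, H, W)
        for t in range(T):  # observation phase
            x = torch.cat([frames[:, t], coord_maps[:, t]], dim=1)
            h, c = self.encoder(x, (h, c))
        # Prediction phase: repeat the last observed input, since
        # future images cannot be obtained at prediction time.
        x = torch.cat([frames[:, -1], coord_maps[:, -1]], dim=1)
        outs = []
        for _ in range(horizon):
            h, c = self.predictor(x, (h, c))
            outs.append(self.head(h))  # probabilistic location map
        return torch.stack(outs, dim=1)  # (B, horizon, 1, H, W)
```

Feeding the predictor's own output back in place of the repeated last input would give the recursive decoder variant mentioned in the abstract; per-pixel logits can be normalized with a softmax over the spatial dimensions to obtain a probability map per future step.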
