Machine Perception and Robotics Group
Chubu University

Deep Learning / International Conference

Human-like Guidance with Gaze Estimation and Classification-based Text Generation

Authors
Masaki Nambata, Kota Shimomura, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
Publication
International Conference on Intelligent Transportation Systems, 2023

Download: PDF (English)

Car navigation systems are widely used and essential for driving assistance. However, drivers often struggle to understand their voice guidance and must repeatedly check the map on a monitor, which can be dangerous. In contrast, guidance given by a human, which refers to visible objects, is clearer to the driver. Human-like Guidance (HLG) is the task of realizing such human-like navigation in a system. In this paper, we propose a novel method for HLG. Our approach defines human-like navigation templates and selects an appropriate sentence for each object in an intersection scene. We also construct a model that estimates the driver's gaze and use this information to choose a reference object for navigation, resulting in a system that provides clear guidance to the driver. Furthermore, we provide a gaze information dataset, the Driving Gaze Dataset, for building driver gaze estimation models. Through experiments using the CARLA autonomous driving simulator, we demonstrate the feasibility of generating navigation instructions that drivers can intuitively understand. In addition, we confirm that our method can generate navigation instructions quickly. This research is expected to mitigate the risky driving caused by checking navigation systems while driving.
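As a rough illustration of the pipeline summarized above (template-based sentence selection combined with a gaze-informed choice of reference object), the following Python sketch shows one possible form such a step could take. The `SceneObject` fields, the template strings, and the gaze-weighted scoring rule are illustrative assumptions for this sketch, not the authors' implementation or the actual model outputs.

```python
# Minimal sketch (assumed interface, not the paper's implementation):
# pick a reference object using driver gaze estimates and fill a
# human-like navigation template for the current maneuver.
from dataclasses import dataclass


@dataclass
class SceneObject:
    name: str          # e.g. "red building" (hypothetical label)
    gaze_score: float  # assumed output of a driver gaze estimation model, in [0, 1]
    salience: float    # assumed visual salience of the object, in [0, 1]


# Hypothetical navigation templates; one is chosen per maneuver class.
TEMPLATES = {
    "turn_left": "Turn left at the intersection just past the {obj}.",
    "turn_right": "Turn right at the intersection with the {obj} on the corner.",
    "straight": "Go straight past the {obj}.",
}


def generate_instruction(maneuver: str, objects: list[SceneObject]) -> str:
    """Select the object the driver is most likely attending to
    (gaze score weighted by salience) and insert it into the template."""
    reference = max(objects, key=lambda o: o.gaze_score * o.salience)
    return TEMPLATES[maneuver].format(obj=reference.name)


if __name__ == "__main__":
    scene = [
        SceneObject("gas station", gaze_score=0.2, salience=0.6),
        SceneObject("red building", gaze_score=0.8, salience=0.7),
    ]
    print(generate_instruction("turn_left", scene))
    # -> "Turn left at the intersection just past the red building."
```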
