Deep Learning, Driver’s Assistance System, Vision and Language Model, Evaluation Method
- Author
- Masaki Nambata, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi, Takehito Teraguchi, Shota Okubo, and Takuya Nanri
- Publication
- International Joint Conference on Computer Vision Theory and Applications, 2025
Download: PDF (English)
In the field of Advanced Driver Assistance Systems (ADAS), car navigation systems have become an essential part of modern driving. However, the guidance provided by existing car navigation systems is often difficult for drivers to follow through voice instructions alone. This challenge has led to growing interest in Human-like Guidance (HLG), a task focused on delivering intuitive navigation instructions that mimic the way a passenger would guide a driver. Previous studies, however, have relied on rule-based systems to generate HLG datasets, which yield inflexible, low-quality data due to their limited textual expressiveness, whereas high-quality datasets are crucial for improving model performance. In this study, we propose a method for automatically generating high-quality navigation sentences from image data using a Large Language Model with a novel prompting approach. Additionally, we introduce a Mixture of Experts (MoE) framework for data cleaning that filters out unreliable data, yielding a dataset that is both expressive and consistent. Furthermore, our proposed MoE evaluation framework enables appropriate evaluation from multiple perspectives, even for complex tasks such as HLG.
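The MoE-style data cleaning described above can be sketched minimally as follows: several independent "expert" scorers each rate a candidate navigation sentence, and a sample is retained only when the experts collectively judge it reliable. All expert names, scoring rules, and thresholds here are illustrative assumptions, not the paper's actual implementation (which the abstract does not specify).

```python
# Hypothetical sketch of Mixture-of-Experts data cleaning for HLG data.
# Each expert returns a score in [0, 1]; a sample survives cleaning only
# if the mean expert score clears a consensus threshold.

def fluency_expert(sentence: str) -> float:
    # Toy proxy: penalize degenerate (very short or very long) sentences.
    n = len(sentence.split())
    return 1.0 if 4 <= n <= 25 else 0.0

def landmark_expert(sentence: str) -> float:
    # Toy proxy: reward mention of a visible landmark, as a passenger would use.
    landmarks = {"station", "intersection", "convenience store", "signal"}
    return 1.0 if any(l in sentence.lower() for l in landmarks) else 0.0

def directive_expert(sentence: str) -> float:
    # Toy proxy: reward an explicit driving directive.
    directives = {"turn", "keep", "go straight", "merge"}
    return 1.0 if any(d in sentence.lower() for d in directives) else 0.0

EXPERTS = [fluency_expert, landmark_expert, directive_expert]

def keep_sample(sentence: str, threshold: float = 0.66) -> bool:
    # Retain the sample only when the mean expert score passes the threshold.
    score = sum(expert(sentence) for expert in EXPERTS) / len(EXPERTS)
    return score >= threshold

candidates = [
    "Turn left at the convenience store just past the signal.",
    "ok",
]
cleaned = [s for s in candidates if keep_sample(s)]
```

The same ensemble-of-judges pattern extends naturally to the evaluation side: replacing the toy heuristics with per-aspect LLM judges gives multi-perspective scoring rather than a single aggregate metric.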