Machine Perception and Robotics Group
Chubu University

Local Image Feature, Deep Learning, International Conference

Coarse-to-Fine Deep Orientation Estimator for Local Image Matching

Author
Y. Mori, T. Hirakawa, T. Yamashita, H. Fujiyoshi
Publication
Asian Conference on Pattern Recognition, 2019

Download: PDF (English)

Convolutional neural networks (CNNs) have become a mainstream method for keypoint matching in addition to image recognition, object detection, and semantic segmentation.
Learned Invariant Feature Transform (LIFT) is a pioneering CNN-based method.
It performs keypoint detection, orientation estimation, and feature description in a single network.
Among these processes, the orientation estimation is needed to obtain invariance for rotation changes.
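To illustrate why an orientation estimate yields rotation invariance, the classical approach (in the spirit of SIFT's orientation assignment, not the CNN estimator discussed here) assigns each patch a dominant gradient orientation and de-rotates it to a canonical frame, so that descriptors computed afterwards match across rotated views. A minimal NumPy/SciPy sketch of that idea:

```python
import numpy as np
from scipy import ndimage

def dominant_orientation(patch):
    """Dominant gradient orientation of a patch in degrees, via a
    magnitude-weighted circular mean of the gradient angles."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    return np.degrees(np.arctan2((mag * np.sin(ang)).sum(),
                                 (mag * np.cos(ang)).sum()))

def canonicalize(patch):
    """Rotate the patch so its dominant orientation maps to zero,
    making descriptors computed on the result rotation-invariant."""
    return ndimage.rotate(patch, dominant_orientation(patch),
                          reshape=False, mode="nearest")
```

A horizontal intensity ramp yields an orientation near 0 degrees, a vertical one near 90, and both are mapped to the same canonical frame.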
However, unlike the keypoint detector and the feature descriptor, the orientation estimator has received little attention as a factor in accurate keypoint matching, even after LIFT was proposed.
In this paper, we propose a novel coarse-to-fine orientation estimator that improves matching accuracy.
First, the coarse orientation estimator estimates orientations to make the rotation error as small as possible even if large rotation changes exist between an image pair.
Second, the fine orientation estimator further improves matching accuracy with the orientation estimated by the coarse orientation estimator.
By using the proposed two-stage CNNs, we can estimate orientations accurately, improving matching performance.
Experimental results on the HPatches benchmark show that our method achieves a better precision-recall curve than single-CNN orientation estimators.
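The two-stage composition can be sketched as follows. This is only an illustration of the coarse-to-fine idea: both estimators below are hand-crafted gradient-based stand-ins for the paper's CNNs, with the coarse stage snapped to 45-degree bins to mimic a prediction that merely bounds the rotation error, and the fine stage predicting the residual on the coarsely de-rotated patch.

```python
import numpy as np
from scipy import ndimage

def _grad_orientation(patch):
    # Magnitude-weighted circular mean of gradient angles, in degrees.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    return np.degrees(np.arctan2((mag * np.sin(ang)).sum(),
                                 (mag * np.cos(ang)).sum()))

def coarse_estimator(patch):
    # Stand-in for the coarse CNN: quantize to 45-degree bins, i.e. a
    # rough estimate that keeps the remaining rotation error small.
    return 45.0 * round(_grad_orientation(patch) / 45.0)

def fine_estimator(patch):
    # Stand-in for the fine CNN: on a coarsely de-rotated patch it
    # only needs to predict a small residual angle.
    return _grad_orientation(patch)

def coarse_to_fine_orientation(patch):
    theta_c = coarse_estimator(patch)
    # De-rotate by the coarse estimate, then refine on the result.
    derot = ndimage.rotate(patch, theta_c, reshape=False, mode="nearest")
    return theta_c + fine_estimator(derot)
```

The final orientation is the sum of the coarse prediction and the fine residual, which is the essential structure of a coarse-to-fine estimator regardless of how each stage is implemented.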
