Machine Perception & Robotics Group
Chubu University

Deep Learning | International Conference

Understanding of Feature Representation in Convolutional Neural Networks and Vision Transformer

Authors
Hiroaki Minoura, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
Publication
International Conference on Computer Vision Theory and Applications, 2023

Download: PDF (English)

Understanding the feature representation (e.g., object shape and texture) of an image is an important clue for image classification tasks using deep learning models, just as it is for us humans. Transformer-based architectures such as the Vision Transformer (ViT) have achieved higher accuracy than Convolutional Neural Networks (CNNs) on such tasks. To capture a feature representation, ViT tends to focus on object shape more than classic CNNs do, as shown in prior work. Subsequently, derivative methods based on self-attention, as well as those not based on self-attention, have also been proposed. In this paper, we investigate the feature representations captured by the derivative methods of ViT in an image classification task. Specifically, using publicly available ImageNet pre-trained models, we investigate i) a feature representation of either an object's shape or texture using the derivative methods with the SIN dataset, ii) classification without relying on object texture, using edge images produced by an edge detection network, and iii) the robustness of different feature representations against common perturbations and image corruptions. Our results indicate that the networks that focused more on shape captured feature representations more accurately in almost all the experiments.
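A minimal sketch of the evaluation protocol the abstract describes, assuming the `timm` and `torch` packages; this is not the authors' released code. The image path `cue_conflict.png` and the shape/texture class indices are hypothetical placeholders: a cue-conflict image (as in the SIN dataset) shows the shape of one class rendered with the texture of another, so the model's top-1 prediction reveals which cue it relies on.

```python
# Probing shape vs. texture bias and corruption robustness with a
# publicly available ImageNet pre-trained model (ViT as an example;
# the paper compares several ViT-derivative architectures).
import torch
import timm
from timm.data import resolve_data_config, create_transform
from PIL import Image

model = timm.create_model("vit_base_patch16_224", pretrained=True)
model.eval()

# Preprocessing pipeline matching the model's pretraining configuration.
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

def top1_class(x: torch.Tensor) -> int:
    """Return the model's top-1 ImageNet class index for a batched tensor."""
    with torch.no_grad():
        return int(model(x).argmax(dim=1))

# i) Shape-vs-texture probe on a cue-conflict image.
img = Image.open("cue_conflict.png").convert("RGB")  # hypothetical path
x = transform(img).unsqueeze(0)                      # shape: (1, 3, 224, 224)
pred = top1_class(x)
SHAPE_LABEL, TEXTURE_LABEL = 281, 207  # hypothetical: e.g., cat shape, dog texture
if pred == SHAPE_LABEL:
    print("model followed the object shape")
elif pred == TEXTURE_LABEL:
    print("model followed the object texture")

# iii) Robustness probe: perturb the same image with Gaussian noise
# (one common corruption) and check whether the prediction is stable.
noisy = (x + 0.1 * torch.randn_like(x)).clamp(-3.0, 3.0)
print("prediction stable under noise:", top1_class(noisy) == pred)
```

In the full protocol, such per-image checks would be aggregated over a labeled evaluation set to report shape-bias and robustness scores per architecture.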
