Super-class Mixup for Adjusting Training Data
- Authors
- Shungo Fujii, Naoki Okamoto, Toshiki Seo, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
- Publication
- Asian Conference on Pattern Recognition, 2021
Download: PDF (English)
Mixup is a data augmentation method for image recognition tasks that generates new training data by mixing two images. Standard mixup randomly samples the two images from the training data without considering the similarity of their data or classes. This random sampling produces mixed samples with low similarity, which makes network training difficult and complicated. In this paper, we propose a mixup method that considers super-classes, where a super-class is a superordinate categorization of object classes. When two images belong to the same super-class, the proposed method tends to generate mixed samples with a nearly equal mixing ratio. In contrast, when two images belong to different super-classes, it generates samples that largely contain one of the two images. Consequently, a network can learn the features that distinguish similar object classes. Furthermore, we apply the proposed method to a mutual learning framework, which is expected to improve the network outputs used for mutual learning. The experimental results demonstrate that the proposed method improves recognition accuracy for both single-model training and mutual training. In addition, we analyze the attention maps of the networks and show that the proposed method also improves the highlighted regions, making a network focus correctly on the target object.
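The sketch below illustrates the general idea of a super-class-aware mixing ratio in Python/PyTorch. It is a minimal illustration under assumed details: the Beta-distribution parameters (`alpha_same`, `alpha_diff`), the `superclass_of` mapping, and the per-sample mixing strategy are illustrative choices, not the exact formulation from the paper.

```python
# Minimal sketch of super-class-aware mixup (illustrative, not the paper's exact method).
import numpy as np
import torch


def superclass_mixup(x, y_onehot, superclass_of, alpha_same=2.0, alpha_diff=0.2):
    """Mix each sample with a randomly paired sample from the same batch.

    x:             batch of images, shape (B, C, H, W)
    y_onehot:      one-hot labels, shape (B, num_classes)
    superclass_of: dict/list mapping a class index to its super-class index (assumed given)
    alpha_same:    Beta(a, a) with a > 1 concentrates ratios near 0.5
                   (balanced mixes for pairs sharing a super-class)
    alpha_diff:    Beta(a, a) with a < 1 concentrates ratios near 0 or 1
                   (one image dominates for pairs from different super-classes)
    """
    batch_size = x.size(0)
    perm = torch.randperm(batch_size)

    # Class indices of each sample and its mixing partner.
    cls_a = y_onehot.argmax(dim=1)
    cls_b = y_onehot[perm].argmax(dim=1)

    # Draw a per-pair mixing ratio depending on super-class agreement.
    lams = []
    for a, b in zip(cls_a.tolist(), cls_b.tolist()):
        if superclass_of[a] == superclass_of[b]:
            lams.append(np.random.beta(alpha_same, alpha_same))  # roughly balanced mix
        else:
            lams.append(np.random.beta(alpha_diff, alpha_diff))  # one image dominates
    lam = torch.tensor(lams, dtype=x.dtype).view(-1, 1, 1, 1)

    # Mix images and soft labels with the same per-sample ratio.
    mixed_x = lam * x + (1.0 - lam) * x[perm]
    mixed_y = lam.view(-1, 1) * y_onehot + (1.0 - lam.view(-1, 1)) * y_onehot[perm]
    return mixed_x, mixed_y
```

In this reading, the super-class relationship only changes the distribution from which the mixing ratio is drawn; the rest follows standard mixup, so the sketch drops into an existing training loop wherever mixup is normally applied.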