Machine Perception & Robotics Group
Chubu University

Deep Learning Image Classification Self-Supervised Learning Continual Learning International Conference

Diverse Data Selection Considering Data Distribution for Unsupervised Continual Learning

Authors
Naoto Hayashi, Tsubasa Hirakawa, Takayoshi Yamashita, and Hironobu Fujiyoshi
Publication
International Conference on Computer Vision Theory and Applications, 2024

Download: PDF (English)

In continual learning, the training data changes over the course of learning, making it difficult to solve previously learned tasks as the model adapts to the data of a new task. Many methods have been proposed to prevent this catastrophic forgetting. Among them, Lifelong Unsupervised Mixup (LUMP) can learn from unlabeled data, which is readily acquired in the real world. LUMP trains a model with a self-supervised learning method and prevents catastrophic forgetting by combining mixup, a data augmentation method, with a replay buffer that stores part of the data used to train previous tasks. However, LUMP selects the data to store in the replay buffer at random from the training data, which may bias the stored samples and cause the model to specialize to only part of the data distribution. Therefore, we propose a method for selecting the data to store in the replay buffer for unsupervised continual learning. The proposed method splits the distribution of the training data into multiple clusters with k-means clustering and then selects one sample from each cluster. The data selected in this way preserve the distribution of the original training data, making them more useful for self-supervised learning.
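A minimal sketch of this selection step is shown below, assuming scikit-learn's KMeans and sample embeddings taken from the self-supervised encoder. The function name, the embedding input, and the nearest-to-centroid choice of cluster representative are our illustration of the idea, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_diverse_replay_samples(features: np.ndarray, buffer_size: int) -> np.ndarray:
    """Select one sample per k-means cluster so that the replay buffer
    preserves the distribution of the training data.

    features:    (N, D) array of sample embeddings (assumed here to come
                 from the self-supervised encoder).
    buffer_size: number of samples to store, used as the number of clusters.
    Returns the indices of the selected samples.
    """
    kmeans = KMeans(n_clusters=buffer_size, n_init=10, random_state=0)
    labels = kmeans.fit_predict(features)

    selected = []
    for c in range(buffer_size):
        members = np.where(labels == c)[0]
        # Take the member closest to the cluster centroid as the
        # representative of that region of the data distribution
        # (one plausible choice; the paper may define it differently).
        dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.asarray(selected)

# Example: fill a 64-sample replay buffer from 1,000 random embeddings.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 128)).astype(np.float32)
    idx = select_diverse_replay_samples(feats, buffer_size=64)
    print(idx.shape)  # (64,)
```

Because one representative is drawn from every cluster, the buffer spans all regions of the training distribution rather than concentrating on densely sampled ones, which is the contrast with LUMP's random selection.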
