Machine Perception and Robotics Group
Chubu University

International Conference

Faster convergence and Uncorrelated gradients in Self-Supervised Online Continual Learning

Authors
Koyo Imai, Naoto Hayashi, Tsubasa Hirakawa, Takayoshi Yamashita, Tohgoroh Matsui, Hironobu Fujiyoshi
Publication
Asian Conference on Computer Vision (ACCV), 2024

Download: PDF (English)

Self-Supervised Online Continual Learning (SSOCL) focuses on continuously training neural networks from data streams. It presents a more realistic Self-Supervised Learning (SSL) problem setting, where the goal is to learn directly from real-world data streams. Conventional SSL, however, requires multiple offline training passes over a fixed IID dataset to acquire good feature representations. In contrast, SSOCL must learn from a non-IID data stream in which the data distribution changes over time and new data arrives sequentially. The resulting challenges are insufficient training under shifting data distributions and the acquisition of inferior feature representations from non-IID streams. In this study, we propose a method that addresses these challenges in SSOCL. The proposed method consists of a Multi-Crop Contrastive Loss, a TCR Loss, and data selection based on cosine similarity to representative features. The Multi-Crop Contrastive Loss and TCR Loss enable quick adaptation to changes in the data distribution, while cosine-similarity-based data selection keeps diverse data in the replay buffer, facilitating learning from non-IID streams. The proposed method achieves higher accuracy than existing methods in evaluations on CIFAR-10, CIFAR-100, ImageNet-100, and CORe50.
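As a rough illustration of the two components named above, the sketch below shows (1) a TCR-style loss in the form used by EMP-SSL-like methods, and (2) cosine-similarity-based data selection that keeps a diverse replay buffer by comparing each incoming feature against a representative feature of the stored samples. This is a minimal sketch under our own assumptions, not the authors' implementation: the class name DiversityReplayBuffer, the buffer capacity, the use of a single mean-feature representative, and the eviction rule are all illustrative.

import torch
import torch.nn.functional as F

def tcr_loss(z: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    """Total Coding Rate (TCR) loss in the form used by EMP-SSL-style
    methods (assumed here): the negative coding rate of a batch of
    features z with shape (batch, dim). Maximizing the coding rate
    decorrelates feature dimensions and prevents collapse."""
    b, d = z.shape
    z = F.normalize(z, dim=1)
    cov = z.T @ z * (d / (b * eps ** 2))
    return -0.5 * torch.logdet(torch.eye(d, device=z.device) + cov)

class DiversityReplayBuffer:
    """Illustrative replay buffer that favors samples dissimilar to a
    representative feature (assumed here to be the mean of the stored,
    L2-normalized features); not the authors' implementation."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.images = []    # stored input samples
        self.features = []  # their L2-normalized features

    def add(self, image: torch.Tensor, feature: torch.Tensor) -> None:
        feature = F.normalize(feature, dim=0)
        if len(self.images) < self.capacity:
            self.images.append(image)
            self.features.append(feature)
            return
        # Representative feature of the current buffer contents.
        rep = F.normalize(torch.stack(self.features).mean(dim=0), dim=0)
        sims = torch.stack(self.features) @ rep  # cosine similarities
        redundant = int(torch.argmax(sims))      # most typical stored sample
        # Keep the incoming sample only if it adds diversity, i.e. it is
        # less similar to the representative than the most redundant entry.
        if feature @ rep < sims[redundant]:
            self.images[redundant] = image
            self.features[redundant] = feature

In the paper, representative features could plausibly be maintained per class or per cluster; the single global representative above is only for brevity.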
