Original title: Revisiting Supervision for Continual Representation Learning
Authors: Daniel Marczak, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
In continual learning, models learn tasks sequentially, one at a time. While supervised continual learning has been the norm, recent studies have spotlighted the efficacy of self-supervised continual representation learning, often attributing the strength of self-supervised representations to the multi-layer perceptron (MLP) projector. This study takes a different approach by reevaluating the role of supervision in continual representation learning. Contrary to the assumption that additional human-provided labels might degrade representation quality, the research shows that supervised models, when trained with an MLP projection head, can surpass self-supervised models in continual representation learning. This challenges existing notions and underscores the value of supervised approaches for continual learning.
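To illustrate the idea of pairing a supervised classifier with an MLP projector, here is a minimal sketch in PyTorch. It is not the authors' implementation; the class name, layer sizes, and the choice of ResNet-18 backbone are assumptions made for illustration only. The point is that cross-entropy is applied on top of projected features, while the backbone features remain available as the learned representation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SupervisedWithProjector(nn.Module):
    """Hypothetical sketch: backbone + MLP projector + linear classifier."""

    def __init__(self, num_classes: int, proj_dim: int = 256, hidden_dim: int = 2048):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features      # 512 for ResNet-18
        backbone.fc = nn.Identity()             # keep only the feature extractor
        self.backbone = backbone
        # MLP projector, similar in spirit to those used by self-supervised methods
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, proj_dim),
        )
        # Supervised cross-entropy loss is computed on the projected features
        self.classifier = nn.Linear(proj_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)             # representation kept for downstream tasks
        projected = self.projector(features)
        return self.classifier(projected)

# Minimal usage: one supervised training step on a dummy batch
model = SupervisedWithProjector(num_classes=10)
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
```

In a continual setting, such a model would be trained task by task with plain cross-entropy; the quality of the backbone representations is then evaluated across tasks, which is the comparison the paper revisits against self-supervised baselines.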
Original article: https://arxiv.org/abs/2311.13321