Self-supervised visual learning in the low-data regime: A comparative evaluation

Open Access
Authors
  • S. Konstantakos
  • J. Cani
  • I. Mademlis
  • D.I. Chalkiadaki
Publication date 01-03-2025
Journal Neurocomputing
Article number 129199
Volume 620
Number of pages 24
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Self-Supervised Learning (SSL) is a valuable and robust training methodology for contemporary Deep Neural Networks (DNNs), enabling unsupervised pretraining on a ‘pretext task’ that does not require ground-truth labels/annotation. This allows efficient representation learning from massive amounts of unlabeled training data, which in turn increases accuracy on a ‘downstream task’ via supervised transfer learning. Despite the relatively straightforward conceptualization and applicability of SSL, it is not always feasible to collect and/or utilize very large pretraining datasets, especially in real-world application settings. In particular, in specialized and domain-specific application scenarios, assembling a relevant image pretraining dataset with millions of instances may be impractical, or pretraining at this scale may be computationally infeasible, e.g., due to the unavailability of the substantial computational resources that SSL methods typically require to produce improved visual analysis results. This situation motivates an investigation into the effectiveness of common SSL pretext tasks when the pretraining dataset is of relatively limited/constrained size. This work briefly introduces the main families of modern visual SSL methods and subsequently conducts a thorough comparative experimental evaluation in the low-data regime, aiming to identify: (a) what is learnt via low-data SSL pretraining, and (b) how different SSL categories behave in such training scenarios. Interestingly, for domain-specific downstream tasks, in-domain low-data SSL pretraining outperforms the common approach of large-scale pretraining on general datasets. Grounded on the obtained results, valuable insights are highlighted regarding the performance of each category of SSL methods, which in turn suggest straightforward future research directions in the field.
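To make the ‘pretext task’ idea in the abstract concrete, the sketch below implements the InfoNCE (NT-Xent) contrastive loss used by one common family of visual SSL methods (e.g., SimCLR-style approaches). This is an illustrative NumPy example, not code from the paper: the function name, batch shapes, and temperature value are assumptions chosen for the demonstration. Each image contributes two augmented views; the loss pulls the embeddings of the two views of the same image together and pushes apart all other pairs in the batch, without using any labels.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Illustrative InfoNCE / NT-Xent contrastive loss (NumPy sketch).

    z1, z2: (N, D) arrays of embeddings of two augmented views of the
    same N images; row i of z1 and row i of z2 form a positive pair.
    """
    # L2-normalize so the dot product equals cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)        # (2N, D) stacked views
    sim = z @ z.T / temperature                 # pairwise similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    # The positive for row i is its counterpart view (offset by N).
    half = len(z1)
    pos = np.concatenate([np.arange(half, n), np.arange(0, half)])
    # Cross-entropy of each row's positive against all other pairs.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

After pretraining an encoder with a loss of this kind on unlabeled in-domain images, the frozen or fine-tuned representations are transferred to the labeled downstream task, which is the pipeline the paper evaluates in the low-data regime.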
Document type Article
Language English
Published at https://doi.org/10.1016/j.neucom.2024.129199