Understanding Self-supervised Contrastive Learning through Supervised Objectives

Authors: Byeongchan Lee

TMLR 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. "We empirically validate the effect of balancing positive and negative pair interactions. All theoretical proofs are provided in the appendix, and our code is included in the supplementary material. ... For our experiments, we adopt SimCLR with a temperature parameter τ = 0.5, using ImageNet (Deng et al., 2009) as the dataset and ResNet-50 (He et al., 2016) as the backbone. We assess top-1 accuracy using linear evaluation, a standard protocol for evaluating self-supervised learning algorithms. ... Figure 3 shows that using data augmentation with debiased prototype representation leads to an increase in accuracy."
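The quoted setup follows the standard SimCLR objective. As a reference point only (this is not the authors' code), a minimal sketch of the NT-Xent loss with the paper's temperature τ = 0.5, assuming PyTorch and a hypothetical helper name `nt_xent_loss`:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, tau=0.5):
    """Standard SimCLR NT-Xent loss (sketch, not the paper's code).

    z1, z2: (N, D) projection-head outputs for two augmented views
    of the same N images; tau is the temperature (0.5 in the paper).
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / tau                               # temperature-scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # Row i's positive is its other view: i+N for the first half, i-N for the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)
```

Each embedding is classified against all 2N−1 other embeddings, with its other augmented view as the correct "class"; the temperature controls how sharply hard negatives are weighted.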
Researcher Affiliation: Academia. "Byeongchan Lee EMAIL KAIST"
Pseudocode: No. The paper includes mathematical formulations, proofs, and conceptual figures, but does not contain any explicitly labeled "Pseudocode" or "Algorithm" blocks or structured code-like procedures.
Open Source Code: Yes. "All theoretical proofs are provided in the appendix, and our code is included in the supplementary material."
Open Datasets: Yes. "For our experiments, we adopt SimCLR with a temperature parameter τ = 0.5, using ImageNet (Deng et al., 2009) as the dataset and ResNet-50 (He et al., 2016) as the backbone. ... We use ImageNet as the benchmark dataset, as it is one of the most representative large-scale image datasets. ... We include results on CIFAR-10 (Krizhevsky et al., 2009)."
Dataset Splits: Yes. "The training set comprises 1,281,167 images, while the validation set comprises 50,000 images. As ImageNet's test set labels are unavailable, we utilize the validation set as a test set for evaluation purposes. ... We use ImageNet-LT (ImageNet Long-Tailed) as a benchmark for imbalanced datasets. ... The training set consists of 115,846 images... The test set is balanced, consisting of 50,000 images, with each class having exactly 50 images. ... The training set comprises 50,000 images, while the test set comprises 10,000 images. CIFAR-10 contains 10 classes, with all images standardized to a fixed size of 32×32."
Hardware Specification: Yes. "With 8 NVIDIA V100 GPUs, the pretraining takes about 2.5 days with 13 GB peak memory usage, the linear evaluation takes about 1.5 days with 8 GB peak memory usage, and the k-nearest neighbors evaluation takes about 40 minutes with 30 GB peak memory usage."
Software Dependencies: No. The paper mentions optimizers (SGD, LARS), models (ResNet-50), and techniques (Batch Normalization, the ReLU activation function) but does not specify software names with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup: Yes. "Pretraining configuration: We pretrain the encoder with a batch size of 512 for 100 epochs. We employ the SGD optimizer and set the momentum to 0.9, the learning rate to 0.1, and the weight decay rate to 0.0001. Additionally, we implement a cosine decay schedule for the learning rate, as proposed by Loshchilov & Hutter (2016); Chen et al. (2020a). ... Evaluation configuration: After pretraining, we employ linear evaluation, which is the standard evaluation protocol. ... Training the linear classifier is conducted with a batch size of 4,096 for 90 epochs, utilizing the LARS optimizer (You et al., 2017)."
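The quoted pretraining hyperparameters (SGD, momentum 0.9, learning rate 0.1, weight decay 0.0001, cosine decay over 100 epochs) map directly onto a few lines of optimizer setup. A minimal sketch, assuming PyTorch (the paper does not name a framework) and a stub linear encoder standing in for ResNet-50:

```python
import torch

# Stub encoder; the paper uses ResNet-50, replaced here with a linear layer
# so the optimizer/scheduler wiring can be shown in isolation.
encoder = torch.nn.Linear(2048, 128)

# Hyperparameters quoted from the paper: SGD, momentum 0.9, lr 0.1,
# weight decay 0.0001, cosine decay schedule, 100 epochs, batch size 512.
optimizer = torch.optim.SGD(
    encoder.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0001
)
epochs = 100
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # ... one pass over the 512-image batches would go here ...
    optimizer.step()   # placeholder step so the scheduler can advance
    scheduler.step()   # decay the learning rate once per epoch
```

With `T_max` equal to the total number of epochs, the learning rate follows a half-cosine from 0.1 down to (approximately) zero by the final epoch.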