OpenCon: Open-world Contrastive Learning

Authors: Yiyou Sun, Yixuan Li

TMLR 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of OpenCon on challenging benchmark datasets and establish competitive performance. On the ImageNet dataset, OpenCon significantly outperforms the current best method by 11.9% and 7.4% on novel and overall classification accuracy, respectively. Empirically, OpenCon establishes strong performance on challenging benchmark datasets, outperforming existing baselines by a significant margin (Section 5). |
| Researcher Affiliation | Academia | Yiyou Sun EMAIL Yixuan Li EMAIL University of Wisconsin-Madison |
| Pseudocode | Yes | Details of L_l and L_u are in Appendix B, along with the complete pseudo-code in Algorithm 1 (Appendix). |
| Open Source Code | Yes | The code is available at https://github.com/deeplearning-wisc/opencon. |
| Open Datasets | Yes | We evaluate on the standard benchmark image classification datasets CIFAR-100 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). |
| Dataset Splits | Yes | By default, classes are divided into 50% seen and 50% novel classes. We then select 50% of known classes as the labeled dataset, and the rest as the unlabeled set. The division is consistent with Cao et al. (2022), which allows us to compare performance in a fair setting. |
| Hardware Specification | Yes | We run all experiments with Python 3.7 and PyTorch 1.7.1, using NVIDIA GeForce RTX 2080Ti GPUs. |
| Software Dependencies | Yes | We run all experiments with Python 3.7 and PyTorch 1.7.1, using NVIDIA GeForce RTX 2080Ti GPUs. |
| Experiment Setup | Yes | For CIFAR-100/ImageNet-100, the model is trained for 200/120 epochs with batch size 512, using stochastic gradient descent with momentum 0.9 and weight decay 10⁻⁴. The learning rate starts at 0.02 and decays by a factor of 10 at the 50% and 75% stages of training. The momentum for prototype updating γ is fixed at 0.9. The percentile p for OOD detection is 70%. We fix the weight of the KL-divergence regularizer at 0.05. |
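The split protocol quoted above (50% seen / 50% novel classes, with half of the seen-class data labeled, following Cao et al., 2022) can be sketched as follows. This is a minimal illustration with hypothetical helper names, not code from the OpenCon repository; the exact sampling mechanics in the paper may differ.

```python
import random

def split_classes(num_classes=100, seen_frac=0.5, seed=0):
    """Divide class IDs into 'seen' and 'novel' sets (50/50 by default),
    mirroring the split described in the checklist entry above."""
    rng = random.Random(seed)
    classes = list(range(num_classes))
    rng.shuffle(classes)
    n_seen = int(num_classes * seen_frac)
    return classes[:n_seen], classes[n_seen:]

def assign_sample(label, seen_classes, labeled_frac=0.5, rng=None):
    """A sample is labeled only if its class is seen AND it falls into the
    labeled 50% of that class; all novel-class samples (and the remaining
    seen-class samples) go to the unlabeled set."""
    rng = rng or random
    if label in seen_classes and rng.random() < labeled_frac:
        return "labeled"
    return "unlabeled"
```

For CIFAR-100 this yields 50 seen and 50 novel classes, with roughly a quarter of all samples ending up labeled.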
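The learning-rate schedule in the experiment setup (start at 0.02, divide by 10 at the 50% and 75% marks of training) can be written as a small step function. The values come from the quoted setup; the exact boundary handling (inclusive vs. exclusive at the decay epochs) is an assumption.

```python
def learning_rate(epoch, total_epochs, base_lr=0.02):
    """Step LR schedule matching the setup above: base LR 0.02, decayed
    by a factor of 10 at 50% and again at 75% of training."""
    lr = base_lr
    if epoch >= total_epochs * 0.5:
        lr /= 10.0
    if epoch >= total_epochs * 0.75:
        lr /= 10.0
    return lr
```

With PyTorch, the same schedule would typically be expressed as `torch.optim.lr_scheduler.MultiStepLR` with milestones at epochs 100 and 150 (for the 200-epoch CIFAR-100 run), on top of `SGD(momentum=0.9, weight_decay=1e-4)`.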