Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning

Authors: Nisha L. Raichur, Lucas Heublein, Tobias Feigl, Alexander Rügamer, Christopher Mutschler, Felix Ott

TMLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results conducted on the CIFAR-10, CIFAR-100, and ImageNet100 datasets for image classification and images of a GNSS-based dataset for interference classification validate the efficacy of our method, showcasing its superiority over existing state-of-the-art approaches.
Researcher Affiliation Academia Nisha L. Raichur, EMAIL, Fraunhofer Institute for Integrated Circuits IIS, Nürnberg, Germany
Pseudocode Yes Algorithm 1 BLCL: Algorithm for Class-Incremental Learning (Python & PyTorch like code)
Open Source Code Yes Git: https://gitlab.cc-asp.fraunhofer.de/darcy_gnss/gnss_class_incremental_learning
Open Datasets Yes Experimental results conducted on the CIFAR-10, CIFAR-100, and ImageNet100 datasets for image classification and images of a GNSS-based dataset for interference classification validate the efficacy of our method, showcasing its superiority over existing state-of-the-art approaches. The CIFAR-10 (Krizhevsky & Hinton, 2009) dataset consists of 60,000 colour images of size 32×32. The CIFAR-100 (Krizhevsky & Hinton, 2009) dataset has 100 classes containing 600 images each, and hence, we train 10 classes per task with 10 tasks in total. ImageNet100 is a subset of the larger ImageNet dataset, containing 100 carefully selected classes with 1,300 images per class, maintaining a balanced distribution. It is commonly used for benchmarking due to its reduced computational requirements while preserving the diversity of ImageNet (Deng et al., 2009).
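The CIFAR-100 protocol quoted above (100 classes, 10 classes per task, 10 tasks) can be sketched as a small helper. This is an illustrative reconstruction, not the paper's code; in particular, the consecutive class ordering is an assumption, since class-incremental benchmarks often shuffle class IDs with a fixed seed first.

```python
# Hypothetical sketch: grouping CIFAR-100's 100 class labels into
# 10 class-incremental tasks of 10 classes each, as the report describes.
def make_task_classes(num_classes: int, classes_per_task: int) -> list:
    """Group class IDs into consecutive per-task chunks."""
    labels = list(range(num_classes))
    return [labels[i:i + classes_per_task]
            for i in range(0, num_classes, classes_per_task)]

tasks = make_task_classes(num_classes=100, classes_per_task=10)
assert len(tasks) == 10             # 10 tasks in total
assert tasks[0] == list(range(10))  # task 1 covers classes 0-9
```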
Dataset Splits Yes We partition the dataset into a 64% training set, a 16% validation set, and a 20% test set (balanced over the classes). We train five tasks: task 1 consists of the classes 0, 1, and 2, task 2 consists of the classes 3 and 4, task 3 consists of the classes 5 and 6, task 4 consists of the classes 7 and 8, and task 5 consists of the classes 9 and 10.
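The 64/16/20 split and the five-task class layout reported above can be expressed concretely. Note that 64/16/20 is exactly an 80/20 train/test split followed by an 80/20 train/validation split of the remaining 80%. The helper and dictionary names here are illustrative, not from the paper:

```python
# Minimal sketch (assumed helper names) of the reported GNSS dataset split:
# 64% train / 16% validation / 20% test.
def split_sizes(n: int) -> tuple:
    """Return (train, val, test) sample counts for n total samples."""
    test = n * 20 // 100
    val = n * 16 // 100
    train = n - val - test
    return train, val, test

# The five GNSS tasks and their class IDs as stated above (11 classes total).
TASK_CLASSES = {
    1: [0, 1, 2],
    2: [3, 4],
    3: [5, 6],
    4: [7, 8],
    5: [9, 10],
}

train, val, test = split_sizes(1000)
assert (train, val, test) == (640, 160, 200)
assert sum(len(c) for c in TASK_CLASSES.values()) == 11
```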
Hardware Specification Yes All experiments are conducted utilizing Nvidia Tesla V100-SXM2 GPUs with 32 GB VRAM, equipped with Core Xeon CPUs and 192 GB RAM.
Software Dependencies No The paper mentions "Python & PyTorch like code" in Algorithm 1 and "torchvision.transforms" in Appendix A.1, but no specific version numbers are provided for these or any other software dependencies.
Experiment Setup Yes We use the vanilla Adam optimizer with a learning rate set to 0.1, a decay rate of 0.1, a batch size of 128, and train each task for 300 epochs.
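The reported hyperparameters can be collected into a configuration sketch. The quote does not specify when the 0.1 decay is applied, so the milestone epochs below are illustrative assumptions, and `CONFIG` and `lr_at_epoch` are hypothetical names rather than the paper's code:

```python
# Hedged sketch of the reported training configuration.
CONFIG = {
    "optimizer": "Adam",     # vanilla Adam, per the quoted setup
    "learning_rate": 0.1,
    "lr_decay": 0.1,
    "batch_size": 128,
    "epochs_per_task": 300,
}

def lr_at_epoch(epoch: int, milestones=(100, 200)) -> float:
    """Learning rate after applying the decay factor at each assumed milestone."""
    lr = CONFIG["learning_rate"]
    for m in milestones:
        if epoch >= m:
            lr *= CONFIG["lr_decay"]
    return lr

assert lr_at_epoch(0) == 0.1
assert abs(lr_at_epoch(150) - 0.01) < 1e-12
```

In PyTorch this would correspond to `torch.optim.Adam` combined with a step-wise learning-rate scheduler, but the exact scheduler is not stated in the paper's quoted setup.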