On the Discrimination and Consistency for Exemplar-Free Class Incremental Learning

Authors: Tianqi Wang, Jingcai Guo, Depeng Li, Zhi Chen

IJCAI 2025

Reproducibility assessment (variable: result — supporting evidence):
Research Type: Experimental — "Extensive experiments and theoretical analysis verified the superiority of DCNet. Experiments conducted across multiple benchmark datasets consistently demonstrate that our method achieves highly competitive EF-CIL performance, with an average improvement of 8.33% over the latest state-of-the-art method on the ImageNet-Subset task."
Researcher Affiliation: Academia — 1) Department of COMP/LSGI, The Hong Kong Polytechnic University, Hong Kong SAR; 2) Department of Computer Science, University College London, United Kingdom; 3) School of AI and Automation, Huazhong University of Science and Technology, China; 4) School of Mathematics, Physics and Computing, The University of Southern Queensland, Australia
Pseudocode: Yes — "The algorithm for HAT and the procedure for DCNet are provided in Appendix B."
Open Source Code: Yes — "Code is available at https://github.com/Tianqi-Wang1/DCNet."
Open Datasets: Yes — "The CIFAR-100 [Krizhevsky et al., 2009] comprises 50k training images and 10k test images... The Tiny-ImageNet [Le and Yang, 2015], a subset of ImageNet... The ImageNet-Subset is a subset of ImageNet (ILSVRC 2012) [Russakovsky et al., 2015] with 100 categories..."
Dataset Splits: Yes — "We split these datasets equally into 10-task and 20-task sequences."
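The equal 10-task and 20-task splits described above amount to partitioning the class labels into contiguous groups of equal size. A minimal sketch (the helper name `split_classes` is illustrative, not from the authors' code, which may order or shuffle classes differently):

```python
# Hedged sketch: partition class IDs into equal task sequences, as in
# the quoted setup. Contiguous ordering is an assumption.
def split_classes(num_classes, num_tasks):
    """Split class IDs 0..num_classes-1 into num_tasks equal groups."""
    assert num_classes % num_tasks == 0, "classes must divide evenly"
    per_task = num_classes // num_tasks
    return [list(range(t * per_task, (t + 1) * per_task))
            for t in range(num_tasks)]

# e.g. CIFAR-100 as a 10-task sequence: 10 classes per task
tasks = split_classes(100, 10)
```

The same call with `num_tasks=20` yields the 20-task sequence (5 classes per task).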
Hardware Specification: No — The paper does not describe the hardware used to run its experiments. It mentions a ResNet-18 model, but no specific GPU/CPU details are provided.
Software Dependencies: No — The paper mentions a ResNet-18 model and LARS training, but does not provide version numbers for any software dependencies (e.g., Python, PyTorch, CUDA).
Experiment Setup: Yes — "For CIFAR-100 and Tiny-ImageNet, consistent with prior work [Kim et al., 2022a], we utilize LARS [You et al., 2017] training for 700 epochs with an initial learning rate of 0.1, introducing the DAC component at epoch 400. For ImageNet-Subset, we train for 100 epochs, incorporating DAC at epoch 50. In all the experiments, we set τ_IOE = 0.05, τ^(0) = 0.2, and configure the dimension of the basis vector to be 256."
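The reported hyperparameters can be collected into a small per-benchmark config. A hedged sketch (`dcnet_config` is an illustrative helper, not from the authors' code; values not stated in the quote, such as the ImageNet-Subset learning rate, are deliberately omitted):

```python
# Hedged sketch of the hyperparameters quoted in the setup row above.
def dcnet_config(dataset):
    """Return the reported training hyperparameters per benchmark."""
    common = {
        "tau_IOE": 0.05,    # temperature tau_IOE (all experiments)
        "tau_0": 0.2,       # initial tau^(0) (all experiments)
        "basis_dim": 256,   # dimension of the basis vectors
    }
    if dataset in ("CIFAR-100", "Tiny-ImageNet"):
        # LARS training [You et al., 2017], per the quoted setup
        common.update({"optimizer": "LARS", "initial_lr": 0.1,
                       "epochs": 700, "dac_start_epoch": 400})
    elif dataset == "ImageNet-Subset":
        common.update({"epochs": 100, "dac_start_epoch": 50})
    else:
        raise ValueError(f"unknown dataset: {dataset!r}")
    return common

cfg = dcnet_config("CIFAR-100")
```

DAC is introduced partway through training in both regimes (epoch 400 of 700, epoch 50 of 100), which the `dac_start_epoch` field captures.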