Class Incremental Learning from First Principles: A Review
Authors: Neil Ashtekar, Jingxi Zhu, Vasant G Honavar
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this review, we take a step back and reconsider the CIL problem. We reexamine the problem definition and describe its unique challenges, contextualize existing solutions by analyzing non-continual approaches, and investigate the implications of various problem configurations. Our goal is to provide an alternative perspective to existing work on CIL and direct attention toward unexplored aspects of the problem. ... Belouadah et al. (2021) 2020 Summarizes work on CIL with empirical evaluations on image classification benchmarks ... Mai et al. (2022) 2020 Focuses on empirical evaluations in the online CIL setting over various performance metrics ... Masana et al. (2022) 2020 Reviews work on CIL in the context of image classification with evaluations across various task-splits and replay strategies ... Zhou et al. (2024) 2023 Reviews deep learning approaches to CIL with memory-aligned evaluations on image classification benchmarks ... Harun et al. (2023a) finds that several highly-cited CIL algorithms actually use more compute than trivially retraining on all of the data at every task! ... Masana et al. (2020) and Zhou et al. (2024) for empirical evidence on the split CIFAR-100 and ImageNet-1K datasets. |
| Researcher Affiliation | Academia | Neil Ashtekar (EMAIL), Artificial Intelligence Research Laboratory, Pennsylvania State University; Jingxi Zhu (EMAIL), Artificial Intelligence Research Laboratory, Pennsylvania State University; Vasant G Honavar (EMAIL), Artificial Intelligence Research Laboratory, Pennsylvania State University |
| Pseudocode | No | The paper describes various approaches and mechanisms for Class-Incremental Learning but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing code or links to source code repositories for the methodology described. |
| Open Datasets | Yes | In a recent analysis on the ImageNet-1K dataset, Harun et al. (2023a) finds that several highly-cited CIL algorithms actually use more compute than trivially retraining on all of the data at every task! ... CIL benchmark the split CIFAR-100 dataset ... Masana et al. (2020) and Zhou et al. (2024) for empirical evidence on the split CIFAR-100 and ImageNet-1K datasets. ... Klasson et al. (2023) provides empirical evidence of this phenomenon across various replay-based methods on the split MNIST series and CIFAR-10 datasets |
| Dataset Splits | Yes | The CIFAR-100 dataset contains 100 classes, and is often divided into 10 tasks, each containing 10 classes, to form the split version for CIL evaluations. ... Klasson et al. (2023) provides empirical evidence of this phenomenon across various replay-based methods on the split MNIST series and CIFAR-10 datasets with randomized class-to-task assignments. |
| Hardware Specification | No | The paper mentions general hardware types such as "edge devices" and "large GPU servers" but does not specify the exact models or configurations used for any experiments within this review paper. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | No | This paper is a review and analysis of existing work in Class-Incremental Learning. It does not present new experimental results or detail any specific experimental setup, hyperparameters, or training configurations of its own. |