Pseudo Informative Episode Construction for Few-Shot Class-Incremental Learning
Authors: Chaofan Chen, Xiaoshan Yang, Changsheng Xu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three popular classification benchmarks (i.e., CUB200, miniImageNet, and CIFAR100) show that the proposed framework can outperform other state-of-the-art methods. |
| Researcher Affiliation | Academia | 1) State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences (CASIA); 2) School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS); 3) Peng Cheng Laboratory, China. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using natural language, mathematical equations (e.g., equations 1-7), and an overall framework diagram (Figure 2), but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code, nor does it provide a link to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | In this work, we perform extensive experiments to evaluate our approach and compare it with the state-of-the-art methods on three widely-used classification benchmarks (CUB200 (Wah et al. 2011), miniImageNet (Vinyals et al. 2016) and CIFAR100 (Krizhevsky and Hinton 2009)) for the few-shot class-incremental learning (FSCIL) task. |
| Dataset Splits | Yes | Specifically, for the CUB200 dataset, the first 100 classes are treated as the base classes and the remaining 100 classes are divided to construct 10 incremental sessions. In each incremental session, we sample 5 images for each of the 10 novel classes, i.e., a 10-way 5-shot setting. For the CIFAR100 and miniImageNet datasets, 60 classes are selected to construct the base session and the remaining 40 classes are used to form 8 incremental sessions. Each incremental session contains 5 classes with 5 images per class, i.e., a 5-way 5-shot setting. ... For each base class on the CIFAR100 and miniImageNet datasets, we split the corresponding training images into 280 support samples and 20 query samples. For each base class on the CUB200 dataset, we select 10 and 15 images to build the support set and query set, respectively. |
| Hardware Specification | Yes | All models are deployed with PyTorch on the A100. |
| Software Dependencies | No | All models are deployed with PyTorch on the A100. (Only mentions "PyTorch" without a specific version number.) |
| Experiment Setup | Yes | The proposed PIEC is trained using SGD optimizer with momentum. In the pre-training procedure, we optimize the backbone followed by the fully connected layer for 100 epochs with the batch size of 128. The initial learning rate is 0.1, which is decayed by 0.1 at epoch 60 and 70. In the pseudo incremental learning stage, we construct two pseudo incremental sessions in each episode, i.e., G = 2. In this work, we set β to 1.0 and M to 100. The recognition model is trained for 100 epochs with the learning rate of 0.0002. We decay the learning rate by 0.5 every 20 epochs. ... We train the pseudo incremental tasks for 50 episodes and set the random seed to 3. |
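The learning-rate schedules quoted in the Experiment Setup row can be sketched as plain functions, which makes the reported decay points easy to check. All values (0.1 decayed by 0.1 at epochs 60 and 70; 0.0002 halved every 20 epochs) come from the quoted setup; the function names are illustrative, not from the paper, and this is a minimal sketch rather than the authors' implementation.

```python
def pretrain_lr(epoch, base_lr=0.1, milestones=(60, 70), gamma=0.1):
    """LR for the 100-epoch pre-training stage: starts at 0.1 and is
    decayed by a factor of 0.1 at epochs 60 and 70 (values as reported)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr


def incremental_lr(epoch, base_lr=2e-4, step=20, gamma=0.5):
    """LR for the 100-epoch pseudo-incremental stage: starts at 0.0002
    and is halved every 20 epochs (values as reported)."""
    return base_lr * gamma ** (epoch // step)
```

With SGD plus momentum (as the paper states), these would correspond to a `MultiStepLR` and a `StepLR` schedule in PyTorch, but the pure-Python form above avoids assuming any framework version.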