Adaptive Decision Boundary for Few-Shot Class-Incremental Learning
Authors: Linhao Li, Yongzhang Tan, Siyuan Yang, Hao Cheng, Yongfeng Dong, Liang Yang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three benchmarks, namely CIFAR100, miniImageNet, and CUB200, demonstrate that incorporating our ADBS method with existing FSCIL techniques significantly improves performance, achieving overall state-of-the-art results. |
| Researcher Affiliation | Academia | 1School of Artificial Intelligence and Data Science, Hebei University of Technology, China 2College of Computing and Data Science, Nanyang Technological University, Singapore |
| Pseudocode | No | The paper describes the methodology in narrative text and mathematical equations, but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Code: https://github.com/Yongzhang-Tan/ADBS |
| Open Datasets | Yes | Following the benchmark setting (Tao et al. 2020), we evaluate the effectiveness of our proposed ADBS method on three datasets, i.e., CIFAR100 (Krizhevsky 2009), miniImageNet (Deng et al. 2009), and Caltech-UCSD Birds-200-2011 (CUB200) (Wah et al. 2011). |
| Dataset Splits | Yes | Each subsequent session, referred to as an incremental session, adopts an N-way, K-shot setting, which includes N classes, each with K samples. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using official codes released by authors for comparison, but does not specify software dependencies with version numbers (e.g., Python, PyTorch versions). |
| Experiment Setup | No | More details about datasets and experimental settings are included in the supplementary materials. |
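The N-way, K-shot incremental-session protocol quoted above can be sketched in a few lines. This is a minimal illustration of the standard FSCIL sampling setup, not the authors' released code; the function name `sample_incremental_session` and the toy dataset are hypothetical, introduced only for this example.

```python
import random
from collections import defaultdict

def sample_incremental_session(dataset, n_way, k_shot, seed=0):
    """Draw one N-way, K-shot incremental session: pick N classes,
    then K labeled examples per class, from (sample, label) pairs."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, label in dataset:
        by_class[label].append(sample)
    # Choose N distinct classes, then K samples from each chosen class.
    classes = rng.sample(sorted(by_class), n_way)
    return {c: rng.sample(by_class[c], k_shot) for c in classes}

# Toy dataset: 10 classes with 20 samples each (stand-in for a benchmark).
toy = [(f"img_{c}_{i}", c) for c in range(10) for i in range(20)]
session = sample_incremental_session(toy, n_way=5, k_shot=5)
```

On the benchmarks listed above, a common configuration is 5-way, 5-shot incremental sessions (e.g., for CIFAR100 and miniImageNet), with the base session holding the remaining classes at full supervision.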