Federated Few-Shot Class-Incremental Learning

Authors: Muhammad Anwar Ma'sum, Mahardhika Pratama, Lin Liu, Habibullah Habibullah, Ryszard Kowalczyk

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive experimental results show that UOPP significantly outperforms state-of-the-art (SOTA) methods on three datasets, with improvements of up to 76% on average accuracy and 90% on harmonic mean accuracy, respectively. Our extensive analysis shows UOPP's robustness across various numbers of local clients and global rounds, low communication costs, and moderate running time.
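In the FSCIL literature, harmonic mean accuracy typically balances base-class and novel-class accuracy so that neither can mask a collapse in the other. A minimal sketch of that standard metric (the exact definition UOPP uses is an assumption here):

```python
def harmonic_mean_accuracy(acc_base: float, acc_novel: float) -> float:
    """Harmonic mean of base- and novel-class accuracy, a common FSCIL
    metric that penalizes imbalance between the two class groups."""
    if acc_base + acc_novel == 0:
        return 0.0
    return 2 * acc_base * acc_novel / (acc_base + acc_novel)

# A model that is strong on base classes but weak on novel ones
# scores well below the arithmetic mean (0.6 here):
print(harmonic_mean_accuracy(0.8, 0.4))  # ≈ 0.5333
```

The harmonic mean drops sharply when either accuracy is low, which is why it is preferred over the plain average for base/novel trade-off reporting.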
Researcher Affiliation | Academia | M. Anwar Ma'sum, Mahardhika Pratama, Lin Liu, Habibullah Habibullah, and Ryszard Kowalczyk, University of South Australia, Mawson Lakes, SA, 5095, Australia. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | A DETAILED PROCESS OF UNIFIED OPTIMIZED PROTOTYPE PROMPT (UOPP): In this section, we present the detailed algorithm of UOPP as shown in Algorithm 1.
Open Source Code | Yes | The source code of UOPP is publicly available at https://github.com/anwarmaxsum/FFSCIL.
Open Datasets | Yes | Datasets: our experiment is done using three benchmarks, i.e., split CIFAR100, split Mini-ImageNet, and split CUB200. The CIFAR100 and Mini-ImageNet datasets contain 100 classes, while CUB200 is a dataset of 200 classes.
Dataset Splits | Yes | For CIFAR100 and Mini-ImageNet, we split the dataset into 9 tasks, i.e., 60 classes for the base task (t = 0) and 5 classes for each few-shot task (t > 0). We split the CUB200 dataset into 11 tasks, i.e., 100 classes for the base task and 10 classes for each few-shot task. Few-shot tasks are measured in 5-shot and 1-shot settings.
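The split scheme above (one large base task followed by equal-sized few-shot tasks) can be sketched as a simple partition of class IDs; the function name is illustrative, not from the paper:

```python
def fscil_splits(num_classes: int, base_classes: int, way: int) -> list:
    """Partition class IDs 0..num_classes-1 into FSCIL tasks:
    one base task of `base_classes` classes, then incremental
    few-shot tasks of `way` classes each."""
    classes = list(range(num_classes))
    tasks = [classes[:base_classes]]
    for start in range(base_classes, num_classes, way):
        tasks.append(classes[start:start + way])
    return tasks

# CIFAR100 / Mini-ImageNet: 60 base classes + 8 five-way tasks = 9 tasks
assert len(fscil_splits(100, 60, 5)) == 9
# CUB200: 100 base classes + 10 ten-way tasks = 11 tasks
assert len(fscil_splits(200, 100, 10)) == 11
```

The shot count (5-shot or 1-shot) then only controls how many labeled images are sampled per class within each incremental task, not the class partition itself.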
Hardware Specification | Yes | Our numerical study is executed on a single NVIDIA A100 GPU with 40 GB memory across 3 different random seeds.
Software Dependencies | No | The paper mentions using a 'ViT backbone' and refers to an 'ODESolver based on the Runge-Kutta method', but does not provide specific version numbers for software libraries (e.g., Python, PyTorch, CUDA) as required.
Experiment Setup | Yes | The total global round is set to 90 (10 rounds/task) for CIFAR100 and Mini-ImageNet and 110 for CUB200. For all methods, local training on each client is set with a maximum of 20 epochs, and the learning rate is set by choosing the best value from {0.001, 5.0} by grid search with 2 incremental factors. For UOPP, the rectification step M is set to 40 steps per iteration. The initial learning rate is set to the best result from 0.001 to 0.2 by a 2 or 5 incremental factor. The prompt length is set to 5.
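One plausible reading of the "0.001 to 0.2 by a 2 or 5 incremental factor" sweep is the common 1-2-5 series of learning-rate candidates; a sketch under that assumption (the exact grid construction in the paper is not specified):

```python
import itertools

def lr_grid(low: float, high: float, factors=(2, 5)) -> list:
    """Candidate learning rates from `low` up to `high`, obtained by
    alternately multiplying by the incremental factors.
    This is one reading of a '2 or 5 incremental factor' sweep."""
    grid, lr = [], low
    for f in itertools.cycle(factors):
        if lr > high:
            break
        grid.append(round(lr, 6))  # round away float drift (0.0100000002 -> 0.01)
        lr *= f
    return grid

print(lr_grid(0.001, 0.2))  # [0.001, 0.002, 0.01, 0.02, 0.1, 0.2]
```

Each candidate would then be evaluated by the grid search described above, with the best value used as the initial learning rate for local training.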