Inconsistency-Based Federated Active Learning
Authors: Chen-Chen Zong, Tong Jin, Sheng-Jun Huang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets demonstrate that IFAL outperforms state-of-the-art methods. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, 211106, China |
| Pseudocode | No | The paper describes the methodology in narrative text and figures, but does not include a clearly labeled pseudocode or algorithm block with structured steps. |
| Open Source Code | No | The paper does not contain any explicit statement about open-sourcing the code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The experiments are conducted on three benchmark datasets: CIFAR-10 [Krizhevsky et al., 2009], CIFAR-100 [Krizhevsky et al., 2009], and Tiny-Imagenet [Le and Yang, 2015]. |
| Dataset Splits | Yes | CIFAR-10 and CIFAR-100 each contain 60k color images of size 32×32, divided into 50k training images and 10k test images, with 10 and 100 classes, respectively. Tiny-Imagenet is a subset of Imagenet [Deng et al., 2009], consisting of 200 classes, with 500 training images and 50 validation images per class. ... Initially, 5% of the examples are randomly selected to form D_L^k for each client k. In each subsequent AL round, 5% of the examples are queried. |
| Hardware Specification | Yes | We repeat all experiments three times on GeForce RTX 3090 GPUs and record the average results for three random seeds. |
| Software Dependencies | No | The paper mentions using the standard federated learning (FL) framework FedAvg and the SGD optimizer, but does not provide version numbers for any software dependencies such as the deep learning framework or programming language. |
| Experiment Setup | Yes | The total number of communication rounds T is set to 100, with 5 local update epochs per round. The federated active learning (FAL) process involves 6 cycles for CIFAR-10/100 and 3 cycles for Tiny-Imagenet. Initially, 5% of the examples are randomly selected to form D_L^k for each client k. In each subsequent AL round, 5% of the examples are queried. A 4-layer CNN is used as the base model, trained with the SGD optimizer (momentum 0.9, weight decay 1e-5, batch size 64). The learning rate (lr) is set to 0.01 and reduced by a factor of 10 after T > 75. The hyper-parameter K in reverse K-nearest neighbor (rKNN) is generally set to 250. The local distillation model is trained similarly for 5×100 epochs, with early stopping applied to reduce training time, and the lr is reduced after the (5×75)-th epoch. For the parameters in Equation (5), α is 0.9 and the temperature T is 4, which are common settings in knowledge distillation tasks. |
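The paper does not reproduce Equation (5), so the exact distillation objective is not verifiable from this report; however, the quoted settings (α = 0.9, temperature T = 4, described as "common settings in knowledge distillation tasks") match the standard Hinton-style loss: a weighted sum of a temperature-softened KL term and the hard-label cross-entropy. The sketch below is an illustrative pure-Python rendering of that common form under those assumed settings, not the paper's confirmed implementation:

```python
import math


def softmax(logits, t=1.0):
    """Temperature-scaled softmax over a list of logits (numerically stable)."""
    m = max(x / t for x in logits)
    exps = [math.exp(x / t - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(student_logits, teacher_logits, target,
                      alpha=0.9, temperature=4.0):
    """Common knowledge-distillation objective (assumed form of Eq. (5)):
    alpha-weighted softened KL(teacher || student) plus (1 - alpha) times the
    hard cross-entropy. alpha=0.9, T=4 follow the settings quoted in the paper.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL divergence on temperature-softened distributions, rescaled by T^2
    # so its gradient magnitude stays comparable to the hard-label term.
    soft = sum(pt * math.log(pt / ps)
               for pt, ps in zip(p_teacher, p_student)) * temperature ** 2
    # Hard cross-entropy against the ground-truth class index.
    hard = -math.log(softmax(student_logits)[target])
    return alpha * soft + (1.0 - alpha) * hard
```

When the student's logits equal the teacher's, the KL term vanishes and only the (1 − α)-weighted cross-entropy remains, which is one quick sanity check on the weighting.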