SADBA: Self-Adaptive Distributed Backdoor Attack Against Federated Learning
Authors: Jun Feng, Yuzhe Lai, Hong Sun, Bocheng Ren
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate SADBA outperforms state-of-the-art methods, achieving higher or comparable backdoor performance and MA across various datasets with limited PMC. Evaluations across three image classification tasks demonstrate that SADBA enhances both model poisoning and data poisoning strategies. Our results indicate that SADBA surpasses most SOTAs in terms of attack success rate and backdoor persistence while requiring a lower PMC. Furthermore, our ablation analysis on the crucial factors influencing SADBA indicates that it maintains stability across various conditions. |
| Researcher Affiliation | Academia | 1Hubei Key Laboratory of Distributed System Security, Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology; 2School of Economics, Wuhan Textile University; 3School of Computer Science and Technology, Hainan University |
| Pseudocode | Yes | Algorithm 1: Optimized Local Training Process |
| Open Source Code | No | The paper does not contain any explicit statement about releasing code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Our experiments are conducted on three classification datasets: MNIST, Fashion MNIST and CIFAR-10. The dataset details and the model architectures utilized are summarized in Tab. 2. |
| Dataset Splits | Yes | Training/Test Images: 60000/10000 (for MNIST, Fashion-MNIST), 50000/10000 (for CIFAR-10). For the three image classification tasks, we assess backdoor performance under a small-scale FL setting with 100 clients and a large-scale FL setting with 500 clients. In each round, we select 10 of the 100 clients to submit their local model updates for aggregation. We consider a more realistic scenario where the selection of clients in each round is randomized. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using Stochastic Gradient Descent (SGD) as the optimizer but does not specify any software dependencies (e.g., library or framework names with version numbers). |
| Experiment Setup | Yes | The global learning rate η is set to 0.1 for all tasks. We utilize Stochastic Gradient Descent (SGD) as the optimizer. During training, each client trains for E epochs with a specific local learning rate lr and a batch size of 64. In each round, we select 10 of the 100 clients to submit their local model updates for aggregation. |
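The experiment setup above describes a standard FedAvg-style loop: 100 clients, 10 sampled at random per round, local SGD on each selected client, and server-side averaging with global learning rate η = 0.1. A minimal sketch of that loop is below; it is not the authors' implementation — the model, the `local_update` body, and the local-epoch value `E` are illustrative placeholders, and only the hyperparameters quoted in the table (client counts, η, batch size) come from the paper.

```python
import random
import numpy as np

# Hyperparameters quoted from the paper's setup (small-scale FL setting).
NUM_CLIENTS = 100        # clients in the federation
CLIENTS_PER_ROUND = 10   # clients sampled per round
GLOBAL_LR = 0.1          # global learning rate eta
BATCH_SIZE = 64          # local batch size (unused by this toy update)
LOCAL_EPOCHS = 2         # "E" local epochs -- value assumed, not stated

def local_update(global_weights, client_id):
    """Placeholder for E epochs of local SGD on the client's data:
    returns a perturbed copy of the global weights."""
    rng = np.random.default_rng(client_id)
    return global_weights - 0.01 * rng.normal(size=global_weights.shape)

def fedavg_round(global_weights, rnd):
    # Randomized client selection each round, as the evaluation describes.
    random.seed(rnd)
    selected = random.sample(range(NUM_CLIENTS), CLIENTS_PER_ROUND)
    updates = [local_update(global_weights, c) for c in selected]
    avg = np.mean(updates, axis=0)
    # Server moves toward the averaged model, scaled by the global LR.
    return global_weights + GLOBAL_LR * (avg - global_weights)

weights = np.zeros(8)  # toy model parameters
for rnd in range(3):
    weights = fedavg_round(weights, rnd)
print(weights.shape)
```

In an attack setting such as SADBA's, the adversary controls some fraction of the selected clients and replaces their `local_update` with a poisoned one; the averaging step is what dilutes or propagates that poison.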