Bad-PFL: Exploiting Backdoor Attacks against Personalized Federated Learning
Authors: Mingyuan Fan, Zhanyi Hu, Fuyi Wang, Cen Chen
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The large-scale experiments across three benchmark datasets demonstrate the superior performance of Bad-PFL against various PFL methods, even when equipped with state-of-the-art defense mechanisms. The source codes are available at https://github.com/fmy266/Bad-PFL. 4 EMPIRICAL EVALUATION Datasets and models. We evaluate on three benchmark datasets: SVHN, CIFAR-10, and CIFAR-100, using ResNet10 as the default model. Appendix B.2 examines the effectiveness of Bad-PFL across varying model sizes (ResNet18, ResNet34) and architectures (MobileNet, DenseNet). Metrics. We report average accuracy (Acc, %) over clean samples and attack success rate (ASR, %) over triggered samples for clients' personalized models on their test sets. |
| Researcher Affiliation | Academia | Mingyuan Fan1, Zhanyi Hu1, Fuyi Wang2, Cen Chen1; 1East China Normal University, 2RMIT University |
| Pseudocode | Yes | Algorithm 1: PFL process with Bad-PFL |
| Open Source Code | Yes | The source codes are available at https://github.com/fmy266/Bad-PFL. |
| Open Datasets | Yes | Datasets and models. We evaluate on three benchmark datasets: SVHN, CIFAR-10, and CIFAR-100, using ResNet10 as the default model. |
| Dataset Splits | Yes | To simulate a non-IID setting, we use a Dirichlet distribution with a factor of 0.5 for data sampling. Each client trains their local and personalized models using SGD with a learning rate of 0.1 and a batch size of 32 for 15 steps (roughly one epoch). |
| Hardware Specification | Yes | We conduct these experiments using CIFAR-10, with the reported times averaged over 100 trials on a single RTX 4090 GPU. |
| Software Dependencies | No | The paper does not explicitly provide specific software dependencies with version numbers. It mentions evaluating on ResNet10, ResNet18, ResNet34, MobileNet, DenseNet, and Vision Transformer, implying standard machine learning frameworks but without specifying versions (e.g., PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | FL settings. Following existing studies (Zhuang et al., 2024), we set 100 clients and 1000 training rounds, with 10 clients being compromised. During each training round, 10% of clients are randomly selected. To simulate a non-IID setting, we use a Dirichlet distribution with a factor of 0.5 for data sampling. Each client trains their local and personalized models using SGD with a learning rate of 0.1 and a batch size of 32 for 15 steps (roughly one epoch). Others. All attacks use a poisoning rate α of 0.2. ... For Bad-PFL, we adopt ϵ = σ = 4/255. The compromised clients utilize the Adam optimizer with a learning rate of 0.01 to train the generative network for 30 steps. |
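The Dirichlet-based non-IID split quoted above (factor 0.5 over 100 clients) is a standard federated-learning partitioning scheme. Below is a minimal sketch of how such a split is commonly implemented; the function name and all details beyond the Dirichlet factor are illustrative assumptions, since the paper excerpt only states the concentration parameter.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Illustrative sketch: assign sample indices to clients by drawing,
    for each class, per-client proportions from Dirichlet(alpha).
    Smaller alpha -> more skewed (more non-IID) class distributions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Per-client share of this class, then split the index array accordingly.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example mirroring the paper's scale: 100 clients, Dirichlet factor 0.5.
labels = np.arange(1000) % 10  # 1000 toy samples over 10 classes
splits = dirichlet_partition(labels, num_clients=100, alpha=0.5)
```

Every sample lands on exactly one client, while each client's class mix is skewed by the Dirichlet draw, which is what makes the setting non-IID.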