Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
Authors: Huazi Pan, Yanjun Zhang, Leo Yu Zhang, Scott Adams, Abbas Kouzani, Suiyang Khoo
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that FedSA can accurately achieve a predefined global accuracy with fewer malicious clients while maintaining a high level of stealth and adjustable learning rates. The paper includes a dedicated Section 4, "Performance Evaluation", comprising "4.1 Experiment Setup", "4.2 Experimental Results", and "4.3 Ablation Study", with tables and figures comparing FedSA to other methods on the CIFAR10, MNIST, and Tiny ImageNet datasets. |
| Researcher Affiliation | Academia | All listed affiliations are universities: Deakin University, University of Technology Sydney, and Griffith University. The email domains also correspond to academic institutions (.edu.au). |
| Pseudocode | Yes | The paper contains a clearly labeled section titled "Algorithm 1 Malicious Model Update" which presents the pseudocode for the proposed method. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code or provide any links to a code repository. The future work section mentions "further investigation" but no current or planned code release. |
| Open Datasets | Yes | The paper uses well-known public datasets for its experiments: "CIFAR10", "MNIST", and "Tiny ImageNet". These are established benchmark datasets commonly used in machine learning research. |
| Dataset Splits | No | The paper describes how training data is distributed among clients for local training (e.g., "each client receives 100 training images from each of the 10 classes" for CIFAR10, and a Dirichlet distribution over class proportions for Non-IID scenarios). However, it does not specify the overall training, validation, and test splits for the datasets used in the experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory specifications, or cloud computing instance types used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages (e.g., Python), libraries (e.g., PyTorch, TensorFlow), or CUDA versions, that were used for the implementation. |
| Experiment Setup | Yes | The paper provides specific experimental setup details under Section 4.1 ("FL System Settings"), including global and local learning rates, global and local batch sizes, and global and local epochs for each dataset and model combination (e.g., "For CIFAR10 dataset with AlexNet, the global learning rate is set as 0.02, the global batch size is set as 128 and the global epochs is set as 100. For the local training process, the batch size is set as 10, and the epochs is set as 5.") |
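The Dirichlet-based Non-IID client partitioning noted in the Dataset Splits row is a standard technique in FL experiments. The sketch below illustrates the general idea, not the paper's exact implementation; the function name, `alpha` value, and seed are illustrative assumptions.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Partition sample indices across clients via a Dirichlet prior.

    For each class, a Dirichlet(alpha) draw decides what fraction of that
    class's samples each client receives; small alpha -> highly non-IID,
    large alpha -> approximately IID. Illustrative sketch, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        # shuffle this class's sample indices
        idx = rng.permutation(np.where(labels == c)[0])
        # per-client proportions for this class
        props = rng.dirichlet(alpha * np.ones(n_clients))
        # cumulative proportions -> split points within this class
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return [np.array(ix) for ix in client_idx]
```

With `alpha = 0.5` and 10 balanced classes, each client ends up with a skewed class histogram, reproducing the heterogeneity the paper's Non-IID scenario targets.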