Attacks against Federated Learning Defense Systems and their Mitigation

Authors: Cody Lewis, Vijay Varadharajan, Nasimul Noman

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed attacks and the mitigation strategy have been tested in a number of different experiments, establishing their effectiveness in comparison with other contemporary methods. The proposed algorithm has also been made available as open source. Finally, in the appendices, we provide an induction proof for the on-off model poisoning attack, and the proof of convergence and adversarial tolerance for the new federated optimization algorithm.
Researcher Affiliation | Academia | Cody Lewis, Vijay Varadharajan, Nasimul Noman. Advanced Cyber Security Engineering Research Centre (ACSRC), The University of Newcastle, Newcastle, Australia
Pseudocode | Yes | Algorithm 1: FL algorithm mitigating the proposed attacks
Open Source Code | Yes | The proposed algorithm has also been made available as open source: https://github.com/codymlewis/viceroy
Open Datasets | Yes | We performed our experiments on three data sets, namely MNIST (LeCun et al., 1998a), KDD Cup 99 (Dua and Graff, 2017), and CIFAR-10 (Krizhevsky and Hinton, 2009).
Dataset Splits | Yes | MNIST is a data set of 28x28 images of handwritten digits. The training set contains 60,000 samples and the testing set has 10,000, each of which is class balanced. KDD Cup 99 contains a set of network traffic flows labelled with 23 classes... The training set has 345,815 samples, and the testing set 148,206... CIFAR-10 is an object recognition data set... with a training set of 50,000 samples and a testing set of 10,000 samples. For experiments using the MNIST and the KDD Cup 99 data sets, we trained a LeNet-300-100 network... For the MNIST and CIFAR-10 data sets, we use the Latent Dirichlet Allocation (LDA) (Hsu et al., 2019) method for data distribution.
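The LDA distribution method cited above (Hsu et al., 2019) is commonly implemented by drawing each class's per-client shares from a symmetric Dirichlet prior, so smaller concentration values produce more heterogeneous endpoints. A minimal NumPy sketch of that idea follows; the function name, `alpha` default, and overall structure are illustrative assumptions, not the paper's code:

```python
import numpy as np

def lda_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients, Dirichlet-prior style
    (Hsu et al., 2019). Smaller alpha -> more skewed (non-IID) clients."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw this class's share for each client, then cut accordingly.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return [np.array(ci) for ci in client_indices]
```

Every index is assigned to exactly one client, so the shards together cover the full training set while each client's class mix is skewed by the Dirichlet draw.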
Hardware Specification | No | All the endpoints and the server were simulated on a single machine.
Software Dependencies | No | We implemented a JAX (Bradbury et al., 2018) based simulation of a federated stochastic gradient descent system with 100 endpoints.
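In a federated SGD system like the one described, the server's core step is aggregating the clients' gradients and applying them to the global model. A minimal JAX sketch of that server step, using a plain element-wise mean (the paper's Algorithm 1 replaces this mean with its attack-mitigating aggregation rule; function names here are assumptions):

```python
import jax
import jax.numpy as jnp

def average_gradients(client_grads):
    """Element-wise mean of the clients' gradient pytrees (plain FedSGD)."""
    return jax.tree_util.tree_map(
        lambda *g: jnp.mean(jnp.stack(g), axis=0), *client_grads)

def server_update(params, client_grads, lr=0.1):
    """Apply the aggregated gradient to the global model's parameters."""
    agg = average_gradients(client_grads)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, agg)
```

Because parameters and gradients are pytrees, the same two functions work unchanged for any model architecture, which is what makes this style of simulation convenient in JAX.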
Experiment Setup | Yes | Each simulation involved training for 5000 rounds, where each round is a single epoch of training for each endpoint in the system, aggregating each of the new gradients in the server and sending the updated model to each endpoint. ... For FoolsGold, we set the confidence parameter κ = 1... For Multi-Krum, we have set the clipping value as the number of adversaries assigned during the experiment. ... For CONTRA, we set the number of expected honest endpoints to the exact number of honest endpoints and other parameters matching the original paper... For our experiments, we set the parameters {ω, η} in our algorithm to {0.525, 0.2}.
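The round structure described in the setup (every endpoint trains one local epoch, the server aggregates, the updated model is redistributed) can be sketched with a toy NumPy simulation. The `Endpoint` class and least-squares objective are purely illustrative assumptions; the mean aggregation stands in for the paper's robust Algorithm 1:

```python
import numpy as np

class Endpoint:
    """Toy endpoint with a local least-squares objective (illustrative only)."""
    def __init__(self, X, y):
        self.X, self.y = X, y

    def local_epoch(self, w):
        # One "epoch" here is a single full-batch gradient for simplicity.
        return self.X.T @ (self.X @ w - self.y) / len(self.y)

def simulate(endpoints, dim, rounds=100, lr=0.1):
    """Each round: all endpoints train one epoch from the current global
    model, the server aggregates the gradients (mean here), and the
    updated model is sent back to every endpoint."""
    w = np.zeros(dim)
    for _ in range(rounds):
        grads = [ep.local_epoch(w) for ep in endpoints]
        w = w - lr * np.mean(grads, axis=0)
    return w
```

With honest endpoints the mean converges to the shared optimum; the attacks studied in the paper exploit exactly this aggregation step, which is why Algorithm 1 substitutes a more adversarially tolerant rule.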