On the Robustness of Distributed Machine Learning Against Transfer Attacks
Authors: Sebastien Andreina, Pascal Zimmer, Ghassan Karame
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Supported by theory and extensive experimental validation using CIFAR10 and Fashion MNIST, we show that such properly distributed ML instantiations achieve across-the-board improvements in accuracy-robustness tradeoffs against state-of-the-art transfer-based attacks... We conduct an extensive robustness evaluation of our approach with state-of-the-art transfer-based attacks and find across-the-board improvements in robustness against all considered attacks. |
| Researcher Affiliation | Collaboration | ¹NEC Labs Europe, Germany; ²Ruhr University Bochum, Germany |
| Pseudocode | Yes | Algorithm 1: Training phase for Weak Learners |
| Open Source Code | Yes | Code: https://github.com/RUBInfSec/distributed_learning_robustness |
| Open Datasets | Yes | extensive experimental validation using CIFAR10 and Fashion MNIST... Due to space constraints, we include our full results and analysis for the CIFAR10 dataset (Krizhevsky 2009) in Table 6 and provide the main results for the Fashion MNIST dataset in Table 5. |
| Dataset Splits | Yes | During the fine-tuning steps, we split the node's training data into an 80%-20% ratio for the training and validation sets, respectively. ... Unless otherwise specified, each node is trained on its disjoint dataset drawn from the complete dataset following a uniform distribution. ... In the case of the Dirichlet distribution, the number of samples of a given class is distributed among the weak learners with parameter α = 0.9. |
| Hardware Specification | Yes | All our experiments were run on an Ubuntu 24.04 machine featuring two NVIDIA A40 GPUs and one NVIDIA H100 GPU, two AMD EPYC 9554 64-core Processors, and 768 GB of RAM. |
| Software Dependencies | Yes | All scenarios were executed using Python 3.9.18, CUDA 12.5, Pytorch 2.2.1, and Ray Tune 2.9.3. |
| Experiment Setup | Yes | Training parameters: Training a machine learning (ML) model involves numerous decisions, including selecting the model's architecture, such as VGG or DenseNet, and determining its width and depth... Additionally, hyperparameters such as learning rate and momentum require careful tuning. ... As detailed in Table 1, we selected a set of eight different architectures (A), nine different optimizers (O), and five different schedulers (S). ... Hyperparams H: Learning rate µ ∈ [0.0001, 0.1], Momentum ν ∈ [0, 0.99], Weight decay λ ∈ [0.00001, 0.01]. ... Ray Tune is configured to run up to 100 experiments per tuning step to determine the most effective hyperparameters. ... it runs a complete training round for 200 epochs. |
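The non-IID partitioning quoted under "Dataset Splits" (per-class sample counts spread across weak learners via a Dirichlet distribution with α = 0.9) can be sketched as below. This is a minimal illustration, not the authors' code; the function name and seed handling are our own assumptions.

```python
import numpy as np

def dirichlet_partition(labels, n_nodes, alpha=0.9, seed=0):
    """Split sample indices among n_nodes weak learners.

    For each class, the fraction of its samples assigned to each node
    is drawn from Dirichlet(alpha); alpha = 0.9 matches the quoted
    setup. The resulting per-node index lists are disjoint and cover
    the whole dataset.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    parts = [[] for _ in range(n_nodes)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        # Per-node proportions for this class.
        props = rng.dirichlet([alpha] * n_nodes)
        # Turn proportions into cut points over the shuffled indices.
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for node, chunk in enumerate(np.split(idx, cuts)):
            parts[node].extend(chunk.tolist())
    return parts
```

A smaller α would skew each class toward fewer nodes; α = 0.9 yields a moderately heterogeneous split.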
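The hyperparameter ranges and the "up to 100 experiments per tuning step" budget quoted above can be illustrated with a plain-Python random search. This is a hedged stand-in for the paper's Ray Tune setup, not its actual configuration; the search-space encoding and function names are ours.

```python
import random

# Illustrative search space mirroring the quoted ranges; learning rate
# and weight decay are sampled on a log scale, as is conventional.
SEARCH_SPACE = {
    "lr":           lambda rng: 10 ** rng.uniform(-4, -1),   # µ in [0.0001, 0.1]
    "momentum":     lambda rng: rng.uniform(0.0, 0.99),      # ν in [0, 0.99]
    "weight_decay": lambda rng: 10 ** rng.uniform(-5, -2),   # λ in [0.00001, 0.01]
}

def random_search(objective, n_trials=100, seed=0):
    """Sample n_trials configurations (the quoted setup allows up to
    100 experiments per tuning step) and return (best_score, config)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {name: draw(rng) for name, draw in SEARCH_SPACE.items()}
        score = objective(cfg)
        if best is None or score > best[0]:
            best = (score, cfg)
    return best
```

In the paper's pipeline the objective would be validation accuracy after training; Ray Tune additionally schedules trials in parallel and can prune poor ones early.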