Generic Adversarial Attack Framework Against Vertical Federated Learning

Authors: Yimin Liu, Peng Jiang

IJCAI 2025

Reproducibility Assessment

Variable | Result | LLM Response
Research Type | Experimental | Evaluation on diverse model architectures confirms the effectiveness of PGAC. Extensive experiments across models confirm that PGAC dominates the VFL prediction, achieving a 96.33% attack success rate and outperforming state-of-the-art attacks by up to 31.83%.
Researcher Affiliation | Academia | Yimin Liu (1,2), Peng Jiang (2); (1) School of Computer Science and Technology, Beijing Institute of Technology, China; (2) School of Cyberspace Science and Technology, Beijing Institute of Technology, China
Pseudocode | Yes | Algorithm 1: Adversarial Input Crafting
Open Source Code | No | The paper does not explicitly state that the source code for its methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | We evaluate PGAC on three cross-domain image classification datasets following [Gu et al., 2021]. Office-31 contains 4,652 images of 31 categories, collected from three domains: Amazon (A), DSLR (D), and Webcam (W) [Saenko et al., 2010]... ImageNet-Caltech is created with ImageNet (I) [Russakovsky et al., 2015] (1,000 classes) and Caltech-256 (C) (256 classes)... DomainNet is composed of six domains with 345 classes [Peng et al., 2019].
Dataset Splits | No | The paper mentions training and test datasets (e.g., D_i^train and D_i^test) and feature partitioning among parties, but it does not provide specific percentages or sample counts for standard train/test/validation splits for the overall model evaluation, nor does it refer to standard predefined splits for the experimental evaluation.
Hardware Specification | Yes | All experiments are conducted on a workstation equipped with an Intel Core i7-10700K processor and running Ubuntu 20.04.1 LTS.
Software Dependencies | No | The paper mentions running Ubuntu 20.04.1 LTS but does not provide specific version numbers for other software dependencies or libraries used in the experiments.
Experiment Setup | Yes | The proxy generation process spans 200 epochs, with the loss weight λ1 set to 1, and λ2 linearly increasing each epoch as λ2 = t/200, where t is the current epoch. For shadow input construction, the percentile parameter ϵ, number of scaling factors m2, and mix ratio µ are set to 10, 5, and 0.6, respectively. We configure the crafting process with a maximum perturbation budget of Λ = 15 and T = 20 iterations, inspired by [Xie et al., 2018].
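The reported hyperparameters can be collected into a minimal configuration sketch. This is illustrative only, assuming the linear λ2 schedule stated in the paper; the names (`lambda2_schedule`, `CraftConfig`) are hypothetical, since the authors' implementation is not public:

```python
# Hedged sketch of the experiment-setup hyperparameters reported in the paper.
# All identifiers here are illustrative; the paper releases no code.
from dataclasses import dataclass

EPOCHS = 200  # proxy generation spans 200 epochs


def lambda2_schedule(t: int, total: int = EPOCHS) -> float:
    """Loss weight lambda_2 increases linearly per epoch: lambda_2 = t / total."""
    return t / total


@dataclass
class CraftConfig:
    lambda1: float = 1.0   # fixed loss weight lambda_1
    epsilon_pct: int = 10  # percentile parameter for shadow input construction
    m2: int = 5            # number of scaling factors
    mu: float = 0.6        # mix ratio
    budget: float = 15.0   # maximum perturbation budget (Lambda)
    iters: int = 20        # crafting iterations (T)
```

Under these values the implied per-iteration step size would be Λ/T = 0.75, a common choice in iterative attacks in the style of [Xie et al., 2018]; the paper itself does not state the step size, so that division is an assumption.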