Universal Backdoor Defense via Label Consistency in Vertical Federated Learning

Authors: Peng Chen, Haolong Xiang, Xin Du, Xiaolong Xu, Xuhao Jiang, Zhihui Lu, Jirui Yang, Qiang Duan, Wanchun Dou

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments across multiple datasets demonstrate the efficacy of the UBD framework, which achieves state-of-the-art performance against diverse backdoor attack types in VFL, including both dirty-label and clean-label variants. Section 5 is titled "Experiments" and contains subsections such as "Experiment Setup," "Evaluation Metrics," "Attack Baselines," "Defense Baselines," "Training Details," and "Experimental Results," along with tables and figures presenting empirical data.
Researcher Affiliation | Collaboration | The authors are affiliated with: (1) Nanjing University of Information Science and Technology (academic); (2) Zhejiang University (academic); (3) Huawei (industry); (4) Fudan University (academic); (5) Pennsylvania State University (academic); (6) Nanjing University (academic). The presence of Huawei alongside several academic institutions makes this an academia-industry collaboration.
Pseudocode | Yes | The paper includes a clearly labeled algorithm block: "Algorithm 1 LCC backdoor detection on the VFL system."
Open Source Code | No | The paper does not provide a repository link or any clear statement that its source code is publicly released.
Open Datasets | Yes | The paper conducts extensive experiments on four datasets to evaluate the UBD framework: three image datasets, CIFAR-10 [Krizhevsky et al., 2009; Li et al., 2024a], Imagenette [Howard and Gugger, 2020], and CINIC-10 [Darlow et al., 2018], and one image-text multimodal dataset, NUS-WIDE [Chua et al., 2009].
Dataset Splits | Yes | The experiments randomly set aside 5% of the training set as a validation set for the defender.
Hardware Specification | Yes | The experiments are conducted with PyTorch on two NVIDIA RTX 3090 GPUs.
Software Dependencies | No | The paper mentions using PyTorch but does not specify its version number or any other software dependencies with their specific versions.
Experiment Setup | Yes | The hyperparameters λ1, λ2, λ3, and λ4 are set to 1, 0.1, 1, and 1, respectively. The Adam optimizer is used to train LP, with learning rates ranging from 0.001 to 0.1 across datasets.
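The defender's 5% validation split noted in the Dataset Splits row can be sketched as a simple index partition. This is a minimal illustration only: the dataset size, seed, and variable names below are assumptions, not values taken from the paper.

```python
import random

# Illustrative training-set size (an assumption, not from the paper).
n_train = 1000
indices = list(range(n_train))
random.Random(0).shuffle(indices)  # fixed seed so the split is reproducible

val_size = int(0.05 * n_train)       # 5% held out as the defender's validation set
defender_val_idx = indices[:val_size]
train_idx = indices[val_size:]
print(len(train_idx), len(defender_val_idx))  # 950 50
```

In a PyTorch pipeline the same effect could be achieved with `torch.utils.data.Subset` over these index lists.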
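As background for the Experiment Setup row, the following is a minimal pure-Python sketch of the Adam update rule used to train LP. The paper presumably uses PyTorch's `torch.optim.Adam`; the toy scalar objective, default betas, and variable names here are illustrative assumptions.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter theta given gradient grad at step t."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)              # bias correction for the moments
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy run: minimize f(theta) = (theta - 3)^2 with lr = 0.1,
# the upper end of the learning-rate range reported in the paper.
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 501):
    grad = 2 * (theta - 3)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
print(round(theta, 3))
```

The learning rate is the main knob the paper varies per dataset (0.001 to 0.1); the betas and epsilon shown are Adam's common defaults.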