Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective

Authors: Zhongjian Zhang, Mengmei Zhang, Xiao Wang, Lingjuan Lyu, Bo Yan, Junping Du, Chuan Shi

AAAI 2025

Reproducibility Assessment (Variable: Result. LLM Response)
Research Type: Experimental. "Extensive experimental results demonstrate that Spattack can effectively prevent convergence and even break down defenses under a few malicious clients, raising alarms for securing FR systems." "We conduct extensive experiments to answer the following research questions. RQ1: How does Spattack perform compared with existing Byzantine attacks? RQ2: Can Spattack break the defenses deployed on FR? RQ3: Can Spattack transfer to different FR systems? RQ4: How do hyperparameters impact Spattack?"
Researcher Affiliation: Collaboration. Affiliations: Beijing University of Posts and Telecommunications; China Telecom Bestpay; Beihang University; Sony AI.
Pseudocode: No. The paper describes the methodology and attack strategies in text, including mathematical formulations, but does not present any structured pseudocode or algorithm blocks.
Open Source Code: No. The paper neither states that source code for the described methodology will be released nor links to a code repository.
Open Datasets: Yes. Spattack is evaluated on three widely used datasets: the movie recommendation datasets ML1M and ML100K (Harper and Konstan 2016) and the game recommendation dataset Steam (Cheuque, Guzmán, and Parra 2019).
Dataset Splits: Yes. The data is split with the leave-one-out method: each user's latest interaction is held out as the test set, and the remaining interactions form the training set.
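The leave-one-out protocol described above can be sketched as follows (a minimal illustration, not the authors' code; the `(user, item, timestamp)` tuple format and the function name are assumptions):

```python
from collections import defaultdict

def leave_one_out_split(interactions):
    """Per-user split: each user's latest interaction goes to the test
    set; all earlier interactions go to the training set."""
    by_user = defaultdict(list)
    for user, item, timestamp in interactions:
        by_user[user].append((timestamp, item))

    train, test = [], []
    for user, events in by_user.items():
        events.sort()                      # ascending by timestamp
        *earlier, latest = events          # latest = most recent event
        test.append((user, latest[1]))
        train.extend((user, item) for _, item in earlier)
    return train, test

# Example: two users with timestamped interactions
interactions = [
    ("u1", "i1", 1), ("u1", "i2", 2), ("u1", "i3", 3),
    ("u2", "i5", 4), ("u2", "i4", 5),
]
train, test = leave_one_out_split(interactions)
# test holds each user's most recent item; train holds the rest
```

Users with a single interaction end up with an empty training contribution, which is why leave-one-out evaluation is typically paired with a minimum-interaction filter on the dataset.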
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies: No. The paper notes that "FedMF (Rong et al. 2022) and the SOTA FedGNN (Wu et al. 2021) are selected as evaluation models," but does not specify version numbers for these or for any other software libraries or frameworks used in the implementation.
Experiment Setup: No. The paper reports attack-related parameters (e.g., malicious ratios ρ, attack starting epochs) but not concrete hyperparameters for the recommender models themselves (e.g., learning rate, batch size, number of training epochs, optimizer settings). The text states "More dataset and reproducibility details are in the Appendix," implying these details are not in the main text.