Federated Recommendation with Explicitly Encoding Item Bias

Authors: Zhihao Wang, He Bai, Wenke Huang, Duantengchuan Li, Jian Wang, Bing Li

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on three public datasets demonstrate the superiority of our method over several state-of-the-art approaches.
Researcher Affiliation | Academia | 1 School of Computer Science, Wuhan University; 2 School of Journalism and Information Communication, Huazhong University of Science and Technology; 3 Hubei Luojia Laboratory, Wuhan, China
Pseudocode | Yes | Algorithm 1: Model training in FREIB
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | We evaluate our proposed method on three public datasets: MovieLens-100K, MovieLens-1M (Harper and Konstan 2015), and Amazon-Beauty (Ni, Li, and McAuley 2019).
Dataset Splits | Yes | The datasets are randomly split with the ratio of 8:2 into the training set and the test set, following the common setting in machine learning.
Hardware Specification | Yes | We fix the seed to ensure reproducibility and conduct experiments on the NVIDIA 3090.
Software Dependencies | No | The paper mentions using the SGD optimizer but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | For a fair comparison, we set the number of participants to 5, conduct E = 50 communication epochs, and perform T = 10 local rounds for the federated setting. We adopt a linear layer as the score function. The initial embedding size is fixed at 32 for all methods, except the item-bias embedding size, which is set to 10. We use the SGD (Robbins and Monro 1951) optimizer with learning rate lr = 0.001, except for PFedRec (Zhang et al. 2023), which employs a larger learning rate for the item encoder depending on the scale of the dataset. The weight decay is set to 1e-5 and the momentum to 0.9. The training batch size is 64. The weight hyper-parameter τ is set to 10 in FREIB. For standardized comparison, we adopt NCF as the backbone in FedProx and FedProto, and the regularization and prototype-learning weights in FedProx and FedProto are also set to 10. We implement the federated learning methods on different platforms by applying Dirichlet sampling with common parameter β = {1.0, 0.5}.
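The 8:2 random split and Dirichlet(β) client partitioning described in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the paper's code: the function and variable names are hypothetical, and the stand-in interaction records only show the mechanics of a fixed-seed split followed by a Dirichlet shard per participant (smaller β yields more heterogeneous, non-IID shards).

```python
import numpy as np

def dirichlet_partition(num_records, num_clients=5, beta=0.5, seed=0):
    """Partition record indices across clients with Dirichlet(beta) proportions.

    Illustrative sketch: sample one proportion per client, then cut a shuffled
    index array at the corresponding cumulative split points.
    """
    rng = np.random.default_rng(seed)  # fixed seed, as the paper fixes seeds
    proportions = rng.dirichlet(np.full(num_clients, beta))
    indices = rng.permutation(num_records)
    # Cumulative proportions -> integer split points over the shuffled indices.
    cuts = (np.cumsum(proportions)[:-1] * num_records).astype(int)
    return np.split(indices, cuts)

# Stand-in for rating records; real data would be user-item interactions.
interactions = np.arange(1000)
rng = np.random.default_rng(0)
perm = rng.permutation(interactions)
train, test = perm[:800], perm[800:]  # 8:2 train/test split

# One shard per participant (5 participants, beta = 0.5 as in the paper).
shards = dirichlet_partition(len(train), num_clients=5, beta=0.5)
```

Each shard then plays the role of one participant's local data for the T = 10 local rounds per communication epoch.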