Augmenting Sequential Recommendation with Balanced Relevance and Diversity

Authors: Yizhou Dang, Jiahui Zhang, Yuting Liu, Enneng Yang, Yuliang Liang, Guibing Guo, Jianzhe Zhao, Xingwei Wang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments verify the effectiveness of BASRec. The average improvement is up to 72.0% on GRU4Rec, 33.8% on SASRec, and 68.5% on FMLP-Rec.
Researcher Affiliation Academia 1 Software College, Northeastern University, China 2 School of Computer Science and Engineering, Northeastern University, China
Pseudocode No The paper describes methods and processes using mathematical formulations and textual explanations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code: https://github.com/KingGugu/BASRec
Open Datasets Yes We adopt four widely-used public datasets: Beauty, Sports, and Home are obtained from Amazon (McAuley, Pandey, and Leskovec 2015) and contain user reviews of products. Yelp is a business dataset; we use the transaction records after January 1st, 2019.
Dataset Splits Yes We adopt the leave-one-out strategy to partition each user's item sequence into training, validation, and test sets.
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments.
Software Dependencies No The paper mentions using the Adam optimizer but does not specify version numbers for any programming languages, libraries, or other software dependencies.
Experiment Setup Yes We set the embedding size to 64 and the batch size to 256. The maximum sequence length is set to 50. ... We use the Adam (Kingma and Ba 2014) optimizer with the learning rate 0.001, β1 = 0.9, β2 = 0.999. For BASRec, we tune the α, a, b in the range of {0.2, 0.3, 0.4, 0.5, 0.6}, {0.1, 0.2, 0.3}, {0.6, 0.7, 0.8}, respectively.
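The leave-one-out split described above can be sketched as follows. This is an illustrative helper under the common convention (last item held out for test, second-to-last for validation), not the authors' code; the function name is ours:

```python
def leave_one_out_split(seq):
    """Split one user's interaction sequence using the leave-one-out strategy.

    Returns:
        train: all items except the last two, used for training.
        valid: (input prefix, target) pair, predicting the second-to-last item.
        test:  (input prefix, target) pair, predicting the last item.
    """
    assert len(seq) >= 3, "need at least 3 interactions for a 3-way split"
    train = seq[:-2]
    valid = (seq[:-2], seq[-2])
    test = (seq[:-1], seq[-1])
    return train, valid, test
```

For example, a sequence `[1, 2, 3, 4, 5]` trains on `[1, 2, 3]`, validates by predicting `4`, and tests by predicting `5`.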
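The reported setup amounts to a fixed training configuration plus a small grid search over the BASRec hyperparameters α, a, and b. A minimal sketch of that search space (the `grid` helper and `CONFIG` dict are ours, for illustration only):

```python
import itertools

# Fixed training settings reported in the paper.
CONFIG = dict(
    embedding_size=64,
    batch_size=256,
    max_seq_len=50,
    optimizer="Adam",
    lr=1e-3,
    betas=(0.9, 0.999),
)

# Tuning ranges reported for BASRec's alpha, a, and b.
ALPHAS = [0.2, 0.3, 0.4, 0.5, 0.6]
A_VALS = [0.1, 0.2, 0.3]
B_VALS = [0.6, 0.7, 0.8]

def grid():
    """Enumerate every (alpha, a, b) combination in the search space."""
    return list(itertools.product(ALPHAS, A_VALS, B_VALS))
```

The full grid is 5 × 3 × 3 = 45 configurations per model and dataset.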