Enhancing Sequential Recommendation with Global Diffusion

Authors: Mingxuan Luo, Yang Li, Chen Lin

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on various datasets demonstrate that GlobalDiff can enhance advanced sequential models by an average improvement of 9.67%.
Researcher Affiliation | Academia | 1 School of Informatics, Xiamen University, Xiamen, China; 2 Institute of Artificial Intelligence, Xiamen University, Xiamen, China
Pseudocode | No | The paper describes its methodology in Section 3, outlining steps and mathematical equations, but it does not include a clearly labeled pseudocode block or algorithm.
Open Source Code | Yes | "Our codes are available online." Code: https://github.com/XMUDM/GlobalDiff
Open Datasets | Yes | "We conduct experiments on three publicly available datasets: MovieLens 1M (ML-1M), Amazon-Beauty, and KuaiRec. These datasets are commonly adopted to evaluate sequential recommendations (Yang et al. 2024; Sun et al. 2019; Kang and McAuley 2018)." Dataset URLs: https://grouplens.org/datasets/movielens/, https://jmcauley.ucsd.edu/data/amazon/, https://kuairec.com/
Dataset Splits | Yes | To answer RQ1, a leave-one-out strategy is adopted for evaluation: the most recent interaction is used for testing, the second-to-last for validation, and the rest for training. The experimental results are summarized in Table 2. To address RQ2, each sequence of length Lu is partitioned into three segments: positions 1, …, Lu − 5 for training, the last four items at positions Lu − 3, …, Lu for testing, and the single item at position Lu − 4 for validation.
Hardware Specification | No | The paper reports the software environment (Python 3.8 and PyTorch 2.0.1) but does not specify hardware components such as the CPU or GPU models used for the experiments.
Software Dependencies | Yes | "We implement all models with Python 3.8 and PyTorch 2.0.1."
Experiment Setup | Yes | In the training stage, the number of diffusion steps T = 20, the scale factor ζ = 1.5, the re-weighting term δ = 1, and the score weighting coefficient γ = 0.5. To optimize GlobalDiff, the Adam optimizer is employed with a batch size of 256 and a learning rate of 0.001. Other hyper-parameters of the three backbone models are set to the default values from their original papers.
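The two split protocols described in the Dataset Splits row can be sketched in a few lines of plain Python. This is an illustrative sketch of the stated protocol, not code from the GlobalDiff repository; the function names and the list-based sequence representation are assumptions.

```python
def leave_one_out_split(sequence):
    """RQ1 protocol: the most recent interaction goes to test,
    the second-to-last to validation, and the rest to training."""
    if len(sequence) < 3:
        raise ValueError("need at least 3 interactions for leave-one-out")
    return list(sequence[:-2]), sequence[-2], sequence[-1]

def segment_split(sequence):
    """RQ2 protocol for a sequence of length Lu: positions 1..Lu-5 train,
    position Lu-4 validation, positions Lu-3..Lu (last four items) test."""
    if len(sequence) < 8:
        raise ValueError("sequence too short for the segment split")
    return list(sequence[:-5]), sequence[-5], list(sequence[-4:])

# Example: a toy interaction sequence of item IDs.
train, valid, test = leave_one_out_split([10, 20, 30, 40, 50])
# train = [10, 20, 30], valid = 40, test = 50
```

Both functions assume sequences are already sorted chronologically, as is standard for sequential-recommendation datasets.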