Dynamic Multi-Interest Graph Neural Network for Session-Based Recommendation

Authors: Mingyang Lv, Xiangfeng Liu, Yuanbo Xu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on three benchmark datasets demonstrate that our methods achieve better performance on different metrics." "We conducted an evaluation of the proposed method on three real-world benchmark datasets." "We conducted an ablation study on each design choice in DMI-GNN."
Researcher Affiliation | Academia | "MIC Lab, College of Computer Science and Technology, Jilin University, China"
Pseudocode | No | The paper describes the methodology using prose and mathematical equations but does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/MICLab-Rec/DMI-GNN
Open Datasets | Yes | "We conducted an evaluation of the proposed method on three real-world benchmark datasets. The Tmall dataset, sourced from the IJCAI-15 competition... The LastFM dataset... The Retailrocket dataset..." Dataset URLs: Tmall: https://tianchi.aliyun.com/dataset/dataDetail?dataId=42; LastFM: http://ocelma.net/MusicRecommendationDataset/lastfm1K.html; Retailrocket: https://www.kaggle.com/retailrocket/ecommerce-dataset
Dataset Splits | Yes | "For a fair comparison, we follow the preprocessing method proposed by SR-GNN (Wu et al. 2019). The statistics of the three datasets after preprocessing are detailed in Table 1." Table 1 (training / test / items / avg. length): Tmall: 351,268 / 25,898 / 40,727 / 6.69; Retailrocket: 433,643 / 15,132 / 36,968 / 5.43; LastFM: 2,837,330 / 672,833 / 38,615 / 11.78
Hardware Specification | Yes | "We conducted the experiment on an NVIDIA 3080Ti, using PyTorch version 1.11.0 + cu113."
Software Dependencies | Yes | "We conducted the experiment on an NVIDIA 3080Ti, using PyTorch version 1.11.0 + cu113."
Experiment Setup | Yes | "For fair comparison, we aligned our experimental settings with those of GCE-GNN. The Adam optimizer (Kingma and Ba 2015) was chosen, operating at a learning rate of 0.001. Our model was configured with an embedding size of 100 and trained within 20 epochs, processing data in batches of 100. For DMI-GNN, we tune the balance coefficient β among {0.001, 0.005, 0.01, 0.05}, U among {2, 3, 4, 5}, and search η from 8 to 18 in increments of 2."
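
The quoted settings can be collected into a small configuration sketch. This is a hypothetical structure for illustration only (the paper and repository do not publish these variable names); it only encodes the fixed settings and the reported search grid for β, U, and η:

```python
from itertools import product

# Fixed training settings quoted from the paper.
BASE_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 0.001,
    "embedding_size": 100,
    "epochs": 20,
    "batch_size": 100,
}

# Hyperparameter search space reported for DMI-GNN.
SEARCH_SPACE = {
    "beta": [0.001, 0.005, 0.01, 0.05],  # balance coefficient β
    "U": [2, 3, 4, 5],
    "eta": list(range(8, 19, 2)),        # η searched from 8 to 18 in steps of 2
}

def grid(space):
    """Yield one dict per point in the Cartesian product of the search space."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

# Merge each grid point with the fixed settings: 4 * 4 * 6 = 96 candidates.
configs = [dict(BASE_CONFIG, **point) for point in grid(SEARCH_SPACE)]
```

Enumerating the grid this way makes the size of the reported search explicit: 96 candidate configurations per dataset.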