Variational Graph Auto-Encoder Driven Graph Enhancement for Sequential Recommendation
Authors: Yuwen Liu, Lianyong Qi, Xingyuan Mao, Weiming Liu, Shichao Pei, Fan Wang, Xuyun Zhang, Amin Beheshti, Xiaokang Zhou
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five public datasets demonstrate that our VGAE-GE model improves recommendation performance and robustness. |
| Researcher Affiliation | Academia | Yuwen Liu^{1,2}, Lianyong Qi^{1,2,3}, Xingyuan Mao^{1,2}, Weiming Liu^{4}, Shichao Pei^{5}, Fan Wang^{4}, Xuyun Zhang^{6}, Amin Beheshti^{6}, Xiaokang Zhou^{7,8}. ^{1}College of Computer Science and Technology, China University of Petroleum (East China); ^{2}Shandong Key Laboratory of Intelligent Oil and Gas Industrial Software; ^{3}State Key Laboratory for Novel Software Technology, Nanjing University; ^{4}College of Computer Science and Technology, Zhejiang University; ^{5}Department of Computer Science, University of Massachusetts Boston; ^{6}School of Computing, Macquarie University; ^{7}Faculty of Business and Data Science, Kansai University; ^{8}RIKEN Center for Advanced Intelligence Project |
| Pseudocode | No | The paper describes its methodology using mathematical equations and textual explanations but does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about making the source code available, nor does it provide any links to a code repository. |
| Open Datasets | Yes | We consider five challenging recommendation datasets: Amazon Books (i.e., Books) and Amazon Toys (i.e., Toys) collected from Amazon platform (https://www.amazon.com/), Retailrocket (i.e., Retail) collected from an e-commerce website (https://www.kaggle.com/retailrocket/ecommerce-dataset/), NYC and TKY [Yang et al., 2013] collected from Foursquare, of which the statistics are shown in Table 1. |
| Dataset Splits | Yes | We use the leave-one-out strategy for evaluation and use two widely adopted ranking-based metrics to evaluate the performance of all methods, namely Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG) with top 5/10/20 recommended candidates. All subsequences of $S^u$ are used as training data, i.e., $\{(s^u_1), (s^u_1, s^u_2), \ldots, (s^u_1, \ldots, s^u_{m-1})\}$. |
| Hardware Specification | Yes | Our method is implemented in PyTorch and experiments are run on an NVIDIA 4090 GPU. |
| Software Dependencies | No | The paper states, "Our method is implemented in PyTorch", but does not provide specific version numbers for PyTorch or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | The Adam optimizer is utilized for parameter inference with a learning rate of 1e-2. For the GNN component, we set the number of layers to 2. The embedding dimension is fixed at 32, with a dropout rate of 0.3 to prevent overfitting. We apply a regularization coefficient of 1e-6 to improve model generalization. The graph is constructed using a distance parameter of 3. For the parameters of the Mamba block, the SSM state expansion factor is 32, the kernel size for 1D convolution is 4, and the block expansion factor for linear projections is 2. For the Books, Retail and Toys datasets, we train the model for 150 epochs; the batch size is set to 2048; the reconstruction rate is 0.3; the maximum user sequence length is restricted to 50. For the NYC and TKY, we train the model for 300 epochs; the batch size is set to 256; the reconstruction rate is 0.6; the maximum user sequence length is restricted to 200. |
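The dataset-split protocol quoted above (leave-one-out evaluation plus all-prefix training expansion) can be sketched as follows; the function names are illustrative and not taken from the authors' code, which is not released.

```python
def expand_prefixes(seq):
    """All prefix subsequences (s_1), (s_1, s_2), ..., (s_1, ..., s_{m-1})
    of a user's interaction sequence, used as training samples."""
    return [tuple(seq[:i]) for i in range(1, len(seq))]


def leave_one_out(seq):
    """Hold out the last interaction as the test target;
    the preceding items form the input history."""
    return seq[:-1], seq[-1]


# Toy sequence of item ids for one user
history, target = leave_one_out([3, 7, 7, 1, 9])
train_samples = expand_prefixes([3, 7, 7, 1, 9])
```

Each prefix predicts its next item during training, while evaluation ranks candidates for the single held-out final item.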
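The two ranking metrics named in the evaluation protocol, HR@k and NDCG@k, reduce to simple formulas under leave-one-out (a single ground-truth item per user); a minimal sketch, assuming 0-based ranked lists:

```python
import math


def hit_ratio_at_k(ranked_items, target, k):
    """HR@k: 1 if the held-out item appears in the top-k list, else 0."""
    return 1.0 if target in ranked_items[:k] else 0.0


def ndcg_at_k(ranked_items, target, k):
    """NDCG@k with a single relevant item: 1 / log2(rank + 2) if the
    target sits at 0-based position `rank` within the top-k, else 0.
    The ideal DCG is 1 (target at rank 0), so no normalization term is needed."""
    if target in ranked_items[:k]:
        rank = ranked_items.index(target)
        return 1.0 / math.log2(rank + 2)
    return 0.0
```

Per-user scores are averaged over all test users to produce the reported HR@5/10/20 and NDCG@5/10/20.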
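For reference, the hyperparameters quoted in the experiment-setup cell can be collected into a single configuration sketch; the key names below are our own shorthand (the Mamba keys follow the common d_state/d_conv/expand convention), not identifiers from the authors' unreleased code.

```python
# Shared hyperparameters reported for VGAE-GE (key names are illustrative).
COMMON = {
    "optimizer": "Adam",
    "learning_rate": 1e-2,
    "gnn_layers": 2,
    "embedding_dim": 32,
    "dropout": 0.3,
    "l2_reg": 1e-6,
    "graph_distance": 3,      # distance parameter used for graph construction
    "mamba_d_state": 32,      # SSM state expansion factor
    "mamba_d_conv": 4,        # kernel size for 1D convolution
    "mamba_expand": 2,        # block expansion factor for linear projections
}

# Per-dataset settings reported in the paper.
PER_DATASET = {
    "Books/Retail/Toys": {"epochs": 150, "batch_size": 2048,
                          "recon_rate": 0.3, "max_seq_len": 50},
    "NYC/TKY":           {"epochs": 300, "batch_size": 256,
                          "recon_rate": 0.6, "max_seq_len": 200},
}
```

A reproduction attempt would merge `COMMON` with the matching `PER_DATASET` entry before constructing the model and optimizer.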