Disentangled and Personalized Representation Learning for Next Point-of-Interest Recommendation

Authors: Xuan Rao, Shuo Shang, Lisi Chen, Renhe Jiang, Peng Han

IJCAI 2025

Reproducibility Assessment: Variable | Result | Evidence (LLM Response)
Research Type: Experimental. "We conduct extensive experiments to evaluate DPRL and compare it with 16 state-of-the-art baselines. The results show that DPRL consistently outperforms all baselines in accuracy, and compared with the best-performing baseline, DPRL achieves an improvement of 24.34% in the best case, 10.53% on average, and 2.71% in the worst case. Moreover, we perform an ablation study to validate our model designs, analyze the effect of DPRL's model parameters, and measure DPRL's running time."
Researcher Affiliation: Academia. Xuan Rao1, Shuo Shang1, Lisi Chen1, Renhe Jiang2 and Peng Han1 — 1University of Electronic Science and Technology of China, Chengdu, China; 2The University of Tokyo, Tokyo, Japan.
Pseudocode: No. The paper describes the methodology using mathematical formulations and descriptive text (e.g., Equations 1-12) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. "Our implementation is available on PyTorch." Repository: https://github.com/kevin-xuan/DPRL
Open Datasets: Yes. "We evaluate DPRL on two widely used real-world datasets: Gowalla and Foursquare." Gowalla: http://snap.stanford.edu/data/loc-gowalla.html; Foursquare: https://sites.google.com/site/yangdingqi/home
Dataset Splits: Yes. "The first 80% of the check-ins for each user are split into equal-length sequences (e.g., 20) to form the training set, while the remaining 20% are used for testing."
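The quoted split procedure can be sketched as a short helper. This is a minimal illustration, not the authors' code: the function name, the list-based input, and the decision to drop a trailing remainder shorter than the sequence length are all assumptions; only the 80/20 chronological cut and the fixed sequence length (e.g., 20) come from the paper.

```python
def split_user_checkins(checkins, seq_len=20, train_ratio=0.8):
    """Split one user's chronologically ordered check-ins into
    fixed-length training sequences and a held-out test portion."""
    cut = int(len(checkins) * train_ratio)
    train, test = checkins[:cut], checkins[cut:]
    # Chunk the first 80% into equal-length sequences; a trailing
    # remainder shorter than seq_len is dropped here (an assumption,
    # since the paper does not say how remainders are handled).
    train_seqs = [train[i:i + seq_len]
                  for i in range(0, len(train) - seq_len + 1, seq_len)]
    return train_seqs, test
```

For a user with 100 check-ins this yields four training sequences of length 20 and a 20-check-in test portion.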
Hardware Specification: Yes. "All methods are performed on the same NVIDIA A10 GPU with identical batch size."
Software Dependencies: No. "Our implementation is available on PyTorch." The paper mentions PyTorch but does not provide a specific version number for PyTorch or any other software dependencies.
Experiment Setup: Yes. "We use the Adam optimizer with default betas, a learning rate of 0.01, a time slot number S of 48, and an embedding dimension d of 30 for Gowalla and 20 for Foursquare. µ is set to 1e-5 for Gowalla and 1e-6 for Foursquare, while λ is set to 0.5 for Gowalla and 0.1 for Foursquare. The region size R is 4000 for Gowalla and 3000 for Foursquare."
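The reported per-dataset hyperparameters can be collected in a small configuration sketch for reproduction attempts. Only the numeric values are taken from the quote above; the dictionary layout, key names, and the `config_for` helper are illustrative assumptions.

```python
# Reported hyperparameters per dataset (values from the paper's setup;
# key names are assumed, not the authors' own).
HYPERPARAMS = {
    "gowalla":    {"lr": 0.01, "time_slots": 48, "embed_dim": 30,
                   "mu": 1e-5, "lambda": 0.5, "region_size": 4000},
    "foursquare": {"lr": 0.01, "time_slots": 48, "embed_dim": 20,
                   "mu": 1e-6, "lambda": 0.1, "region_size": 3000},
}

def config_for(dataset):
    """Look up the reported training configuration for a dataset name."""
    return HYPERPARAMS[dataset.lower()]
```

Note that the Adam betas are left at their library defaults, consistent with the paper's "default betas" wording.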