GeoMamba: Towards Multi-granular POI Recommendation with Geographical State Space Model

Authors: Yifang Qin, Jiaxuan Xie, Zhiping Xiao, Ming Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results illustrate the superiority of GeoMamba over several state-of-the-art baselines."
Researcher Affiliation | Academia | Yifang Qin (1), Jiaxuan Xie (2), Zhiping Xiao (3*), Ming Zhang (1*). (1) State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University; (2) School of Earth and Space Sciences, Peking University; (3) Paul G. Allen School of Computer Science and Engineering, University of Washington.
Pseudocode | No | The paper describes its methods using mathematical formulations and descriptive text but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | "We implement GeoMamba and the baseline methods in PyTorch based on the open-sourced implementations or acquire from the authors." This statement refers to existing open-source implementations of the baselines (or code obtained from their authors); it does not state that the authors' own GeoMamba code is released, and no link is provided.
Open Datasets | Yes | "The visit data are collected on a real-world check-in platform Foursquare (Yang et al. 2014) from Singapore, Tokyo, and New York City respectively."
Dataset Splits | Yes | "We adopt the same data split strategy from previous works (Wang et al. 2022a; Qin et al. 2023b) and split the sequences in chronological order by 80%, 10%, 10% ratio into train, valid, and test sets."
Hardware Specification | No | The paper does not report the hardware used for its experiments (GPU/CPU models, memory, or other machine specifications).
Software Dependencies | No | "We implement GeoMamba and the baseline methods in PyTorch..." While PyTorch is named, no version number is provided.
Experiment Setup | Yes | "The embedding sizes are fixed to 64 and models are optimized by Adam optimizer with L2 normalization weight of 0.001. For GeoMamba, we finetune the scale number N from {2, 3, 4} and the learning rate is fixed as 0.01. The filter coefficient K and J for GaPPO are selected from {1, 2, 3}, and we fix γφ = γψ = 1. The number of SSM layers is set as L = 2."
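The chronological 80%/10%/10% split quoted above can be sketched as follows. This is a minimal illustration of the strategy described in the paper, not the authors' code; the `events` structure and function name are hypothetical.

```python
def chronological_split(events, train_ratio=0.8, valid_ratio=0.1):
    """Split one check-in sequence chronologically into train/valid/test.

    `events` is assumed to be a list of dicts with a "timestamp" key;
    the paper's reported ratios are 80% / 10% / 10%.
    """
    events = sorted(events, key=lambda e: e["timestamp"])  # oldest first
    n = len(events)
    n_train = int(n * train_ratio)
    n_valid = int(n * valid_ratio)
    train = events[:n_train]
    valid = events[n_train:n_train + n_valid]
    test = events[n_train + n_valid:]
    return train, valid, test
```

Because the split is by position in the time-ordered sequence, every training check-in precedes every validation check-in, which in turn precedes every test check-in.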
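The fixed hyperparameters in the quoted setup (embedding size 64, Adam with learning rate 0.01 and L2 weight of 0.001) can be expressed as a short PyTorch sketch. The model here is a placeholder embedding table, since the paper's architecture is not released; the "L2 normalization weight" is read as Adam's `weight_decay`, which is an assumption.

```python
import torch

NUM_POIS = 1000       # hypothetical POI vocabulary size (not from the paper)
EMBEDDING_DIM = 64    # "The embedding sizes are fixed to 64"

# Placeholder for the model's learnable parameters.
poi_embedding = torch.nn.Embedding(NUM_POIS, EMBEDDING_DIM)

# Adam optimizer with the reported fixed learning rate and L2 weight,
# interpreting the latter as weight decay.
optimizer = torch.optim.Adam(
    poi_embedding.parameters(),
    lr=0.01,            # "the learning rate is fixed as 0.01"
    weight_decay=0.001, # "L2 normalization weight of 0.001"
)
```

The remaining choices (scale number N, filter coefficients K and J, L = 2 SSM layers) are architecture-specific and would be set when constructing the GeoMamba model itself.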