Integrating Personalized Spatio-Temporal Clustering for Next POI Recommendation

Authors: Chao Song, Zheng Ren, Li Lu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type Experimental "The experimental results on multiple datasets show that with the help of personalized spatio-temporal clustering, the proposed iPCM is superior to existing methods in various evaluation metrics. We conduct extensive experiments on four real-world LBSN datasets, and the experimental results reveal that with the help of personalized spatio-temporal clustering, iPCM consistently outperforms the state-of-the-art POI recommendation methods."
Researcher Affiliation Academia Chao Song, Zheng Ren, Li Lu; School of Computer Science and Engineering, University of Electronic Science and Technology of China. EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes the model architecture and procedures in detail using natural language and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes "We develop iPCM based on the PyTorch framework with Python 3.9." Code: https://github.com/songchaocn/iPCM
Open Datasets Yes "Experiments were conducted on four public Foursquare datasets collected from a location-based service platform, introduced in the Problem Formulation section, and the details are shown in Table 1." Data: https://sites.google.com/site/yangdingqi/home
Dataset Splits Yes "Regarding the division of the dataset, each user's check-in data is sorted by time, with the first 80% as the training set, the middle 10% as the validation set, and the last 10% as the test set."
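The per-user chronological 80/10/10 split described above can be sketched as follows; this is a minimal illustration, not the authors' code, and the function name and `(timestamp, poi_id)` record layout are assumptions:

```python
def split_user_checkins(checkins):
    """Split one user's check-ins, given as (timestamp, poi_id) pairs,
    chronologically into train (first 80%), validation (middle 10%),
    and test (last 10%) portions."""
    checkins = sorted(checkins, key=lambda c: c[0])  # order by timestamp
    n = len(checkins)
    train_end = n * 8 // 10   # integer arithmetic avoids float rounding
    val_end = n * 9 // 10
    return checkins[:train_end], checkins[train_end:val_end], checkins[val_end:]
```

For example, a user with 10 check-ins yields an 8/1/1 split, with the most recent check-in reserved for testing.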
Hardware Specification Yes "We conduct experiments on a hardware platform (CPU: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50 GHz, RAM: 16.0 GB), and the operating system is Windows 11."
Software Dependencies Yes "We develop iPCM based on the PyTorch framework with Python 3.9."
Experiment Setup Yes "The numbers of dimensions for POI embedding (poi_embedding), user embedding (user_embedding), region embedding (region_embedding) and time embedding (time_embedding) are 128, 128, 64 and 64, respectively. The number of encoder layers in the Transformer module (denoted by encoder_layers) is 2. The dimension of the feedforward network in the Transformer encoder layer (denoted by encoder_hidden) is 1024. The number of attention heads in the multi-head attention module (encoder_head) is 2. The Adam optimizer is used, with the initial learning rate (lr) set to 1e-3. The batch size (batch_size) is set to 100, and the model is trained for 100 epochs, with the best-performing epoch on the validation set metrics being used to calculate the test set metrics."
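The reported hyperparameters can be wired together in PyTorch as in the sketch below. This is an assumed configuration for illustration only: the vocabulary sizes are placeholders, the embeddings are assumed to be concatenated before the encoder, and none of the variable names come from the authors' iPCM repository.

```python
import torch
import torch.nn as nn

# Reported embedding dimensions: POI 128, user 128, region 64, time 64.
POI_DIM, USER_DIM, REGION_DIM, TIME_DIM = 128, 128, 64, 64

# Placeholder vocabulary sizes (not reported in the setup quote).
poi_emb = nn.Embedding(10000, POI_DIM)
user_emb = nn.Embedding(2000, USER_DIM)
region_emb = nn.Embedding(500, REGION_DIM)
time_emb = nn.Embedding(48, TIME_DIM)

# Assumed input width if the four embeddings are concatenated.
d_model = POI_DIM + USER_DIM + REGION_DIM + TIME_DIM  # 384

# Transformer encoder: 2 layers, 2 heads, feedforward dimension 1024.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=2, dim_feedforward=1024, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Adam with the reported initial learning rate of 1e-3.
params = list(poi_emb.parameters()) + list(user_emb.parameters()) + \
         list(region_emb.parameters()) + list(time_emb.parameters()) + \
         list(encoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
```

With batch_size = 100 and 100 training epochs, the validation metrics would then select which checkpoint is evaluated on the test set.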