POI-Enhancer: An LLM-based Semantic Enhancement Framework for POI Representation Learning
Authors: Jiawei Cheng, Jingyuan Wang, Yichuan Zhang, Jiahao Ji, Yuanshao Zhu, Zhibo Zhang, Xiangyu Zhao
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three real-world datasets demonstrate the effectiveness of our framework, showing significant improvements across all baseline representations. |
| Researcher Affiliation | Academia | 1. SKLCCSE, School of Computer Science and Engineering, Beihang University, Beijing, China; 2. Department of Data Science, City University of Hong Kong, Hong Kong, China; 3. MIIT Key Laboratory of Data Intelligence and Management, Beihang University, Beijing, China; 4. School of Economics and Management, Beihang University, Beijing, China |
| Pseudocode | No | The paper describes the methodology using prose, definitions, equations, and figures (e.g., Fig. 1 presents the overall architecture), but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/Applied-Machine-Learning-Lab/POI-Enhancer |
| Open Datasets | Yes | We conducted experiments on three check-in datasets provided by (Yang et al. 2014): Foursquare-NY, Foursquare-SG, and Foursquare-TKY, sampled from New York, Singapore, and Tokyo, respectively. |
| Dataset Splits | Yes | Then we shuffled the dataset and split it into a ratio of 2:1:7 for the test set, validation set, and training set. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only mentions using "Llama-2-7B as the LLM backbone", which refers to a model, not hardware. |
| Software Dependencies | No | The paper mentions using "Llama-2-7B as the LLM backbone" and "LibCity (Wang et al. 2021a)" but does not provide specific version numbers for these or other software dependencies, such as the programming language or libraries used. |
| Experiment Setup | No | The paper states: "The complete implementation details are provided in the Supplementary Material." While it discusses optimal L1 and L2 layer numbers for its internal modules, it does not provide specific experimental setup details such as learning rates, batch sizes, number of epochs, or optimizer settings in the main text. |
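The paper's reported split procedure (shuffle, then partition 2:1:7 into test, validation, and training sets, in that order) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the fixed seed, and the use of Python's standard library are assumptions.

```python
import random

def split_checkins(records, seed=0):
    """Shuffle and split records 2:1:7 into (test, val, train),
    following the set order stated in the paper.

    `records`: any list of check-in entries (illustrative).
    `seed`: fixed for reproducibility of the shuffle (assumed).
    """
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(0.2 * n)          # 2 parts of 10
    n_val = int(0.1 * n)           # 1 part of 10
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]  # remaining 7 parts
    return test, val, train

test, val, train = split_checkins(list(range(1000)))
print(len(test), len(val), len(train))  # 200 100 700
```

Note that shuffling before splitting means the partition is random rather than chronological; the paper does not specify whether the split respects check-in timestamps.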