Cross-Domain Trajectory Association Based on Hierarchical Spatiotemporal Enhanced Attention Hypergraph

Authors: Chenlong Wu, Ze Wang, Keqing Cen, Yude Bai, Jin Hao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on two well-known LBSN cross-domain datasets reveal that Star Net outperforms state-of-the-art baselines in the accuracy of user identity linkage." "We evaluated the performance of Star Net in cross-domain trajectory association tasks, which involve associating cross-domain users based on their historical trajectories."
Researcher Affiliation | Collaboration | 1 School of Software, Tiangong University, Tianjin, China; 2 Tianjin Key Laboratory of Autonomous Intelligence Technology and Systems, Tiangong University, Tianjin, China; 3 Boya Triz (Tianjin) Technology Co., Ltd., Tianjin, China
Pseudocode | No | The paper describes its methodology in detailed prose and mathematical formulas, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that its code is open-sourced nor provides a link to a code repository.
Open Datasets | Yes | "We utilize real-world cross-domain LBSN datasets, Foursquare-Twitter (Zhang, Kong, and Yu 2014) and Instagram-Twitter (Riederer et al. 2016), to validate our proposed model."
Dataset Splits | No | The paper mentions utilizing real-world datasets but does not explicitly specify the training, validation, or test splits (e.g., percentages or counts) used in the experiments.
Hardware Specification | Yes | "All experiments for model efficiency evaluation are conducted on a machine with an Intel Xeon(R) Gold 6348 @ 2.60 GHz 24-core CPU, 100 GB memory, and an NVIDIA Tesla A800 (80 GB) GPU."
Software Dependencies | No | The paper mentions employing baseline source code and fine-tuning parameters, but it does not specify software dependencies with version numbers for its own implementation.
Experiment Setup | Yes | "In our experiments, we set the embedding dimension d=128, the number of multi-head attention heads to 8, and the regularization parameter to 5e-4. For fair comparison, we set the number of epochs to 80, batch size to 16, and dropout to 0.5 for all learning methods. The learning rate was adjusted from 0.0001 to 0.01, using an early stopping mechanism with patience set to 10 to avoid overfitting."
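The hyperparameters quoted in the Experiment Setup row can be mirrored in a short configuration sketch. This is a minimal illustration, not the authors' code: the names `StarNetConfig` and `EarlyStopping` are hypothetical, and the early-stopping logic is a standard patience-based scheme matching the paper's stated patience of 10.

```python
from dataclasses import dataclass


@dataclass
class StarNetConfig:
    """Hypothetical config mirroring the hyperparameters quoted from the paper."""
    embed_dim: int = 128        # embedding dimension d
    num_heads: int = 8          # multi-head attention heads
    weight_decay: float = 5e-4  # regularization parameter
    epochs: int = 80
    batch_size: int = 16
    dropout: float = 0.5
    lr: float = 1e-4            # tuned in the range [1e-4, 1e-2]
    patience: int = 10          # early-stopping patience


class EarlyStopping:
    """Stop training once the validation loss has not improved for `patience` epochs."""

    def __init__(self, patience: int = 10):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop, `EarlyStopping.step` would be called once per epoch with the validation loss, and the loop would break when it returns True.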