Holistic Semantic Representation for Navigational Trajectory Generation
Authors: Ji Cao, Tongya Zheng, Qinghong Guo, Yu Wang, Junshu Dai, Shunyu Liu, Jie Yang, Jie Song, Mingli Song
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three real-world datasets demonstrate that HOSER outperforms state-of-the-art baselines by a significant margin. |
| Researcher Affiliation | Academia | (1) Zhejiang University; (2) Big Graph Center, Hangzhou City University; (3) State Key Laboratory of Blockchain and Data Security, Zhejiang University; (4) Nanyang Technological University; (5) Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security |
| Pseudocode | No | The paper describes the methodology in text and mathematical formulas, but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Code: https://github.com/caoji2001/HOSER |
| Open Datasets | No | "We assess the performance of HOSER and other baselines using three trajectory datasets from Beijing, Porto, and San Francisco." The paper uses real-world datasets but does not state that they are publicly released; further dataset details are deferred to Appendix B.1. |
| Dataset Splits | Yes | Each dataset is randomly split into training, validation, and test sets in a 7:1:2 ratio. |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA RTX A6000 GPU. |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies with version numbers. |
| Experiment Setup | No | The paper describes the model architecture and loss functions, but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings in the main text. |
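The paper's 7:1:2 random train/validation/test split could be reproduced with a minimal sketch like the following. The function name, seed, and use of Python's standard library `random` module are illustrative assumptions, not details taken from the paper:

```python
import random

def split_dataset(trajectories, ratios=(0.7, 0.1, 0.2), seed=42):
    """Randomly split a sequence of trajectories into train/val/test subsets.

    The shuffle uses a fixed seed so the split is reproducible across runs.
    """
    items = list(trajectories)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]  # remainder goes to the test set
    return train, val, test

# Example: 1000 trajectory IDs split 7:1:2
train, val, test = split_dataset(range(1000))
```

With 1000 items, this yields 700 training, 100 validation, and 200 test samples, with no overlap between the subsets.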