Effective and Efficient Representation Learning for Flight Trajectories
Authors: Shuo Liu, Wenbin Li, Di Yao, Jingping Bi
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that FLIGHT2VEC significantly improves performance in downstream tasks such as flight trajectory prediction, flight recognition, and anomaly detection. |
| Researcher Affiliation | Academia | ¹School of Advanced Interdisciplinary Sciences, University of Chinese Academy of Sciences, China; ²Institute of Computing Technology, Chinese Academy of Sciences, China; ³University of Chinese Academy of Sciences, China |
| Pseudocode | No | The paper describes the methodology using textual descriptions and figures (Figure 2, 3, 4) but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/liushuoer/FLIGHT2VEC |
| Open Datasets | Yes | We conduct extensive experiments on two real-world datasets, the Swedish Civil Air Traffic Control (SCAT) (Nilsson and Unger 2023) and Aircraft Trajectory Classification Data for Air Traffic Management (ATFMTraj) (Phisannupawong, Damanik, and Choi 2024b). |
| Dataset Splits | No | The paper does not explicitly provide specific percentages, sample counts, or detailed methodology for training, validation, or test dataset splits. It mentions evaluation metrics for different tasks but not how the data was partitioned. |
| Hardware Specification | Yes | All the experiments are conducted on 2 NVIDIA 3090Ti GPUs. |
| Software Dependencies | No | The paper mentions using 'Adam W optimizer' and references 'Transformer configuration from (Nie et al. 2022)' but does not provide specific version numbers for any software libraries, programming languages, or tools used in the implementation. |
| Experiment Setup | Yes | Our model utilizes the Transformer configuration from (Nie et al. 2022), which includes 3 layers with a model dimension of 256, and 16 attention heads with a dropout rate of 0.2. The binomial masking probability is set at 0.4. The dimension of the representation p_i is set to 256. For training, the batch size is set to 256, and the AdamW optimizer is used with a learning rate of 1×10⁻⁵. The model is pre-trained for 100 epochs with a patch length of 32. |
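The hyperparameters quoted in the experiment-setup row can be gathered into a small configuration sketch for anyone attempting a reproduction. The field names and the `binomial_mask` helper below are illustrative assumptions, not taken from the released FLIGHT2VEC code, which may organize its configuration differently.

```python
import random

# Hyperparameters as reported in the paper's experiment setup.
# Field names are illustrative; the released code may use different ones.
FLIGHT2VEC_CONFIG = {
    "num_layers": 3,         # Transformer layers (config from Nie et al. 2022)
    "d_model": 256,          # model dimension
    "num_heads": 16,         # attention heads
    "dropout": 0.2,
    "mask_prob": 0.4,        # binomial masking probability
    "repr_dim": 256,         # dimension of the representation p_i
    "batch_size": 256,
    "optimizer": "AdamW",
    "learning_rate": 1e-5,
    "epochs": 100,           # pre-training epochs
    "patch_len": 32,         # trajectory patch length
}

def binomial_mask(num_patches, p=0.4, seed=None):
    """Mask each trajectory patch independently with probability p,
    a plausible reading of the binomial masking described in the setup."""
    rng = random.Random(seed)
    return [rng.random() < p for _ in range(num_patches)]
```

For example, a trajectory of 1024 points split into patches of length 32 yields 32 patches, of which roughly 0.4 × 32 ≈ 13 would be masked on average during pre-training.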