DualCast: A Model to Disentangle Aperiodic Events from Traffic Series

Authors: Xinyu Su, Feng Liu, Yanchuan Chang, Egemen Tanin, Majid Sarvi, Jianzhong Qi

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on both freeway and urban traffic datasets, integrating DualCast with three self-attention-based models: GMAN, STTN, and PDFormer. The results show that: (i) DualCast consistently reduces the forecasting errors of these models, with stronger improvements at times with more complex environment contexts, by up to 9.6% in terms of RMSE; (ii) DualCast also outperforms the SOTA model consistently, by up to 2.6%. Section "5 Experiments" further details the "Experimental Setup" and "Datasets", and presents "Overall Results" with performance metrics such as RMSE and MAE, along with an "Ablation Study".
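The row above reports improvements in RMSE and MAE. As a reminder of what these metrics compute, here is a minimal NumPy sketch using the standard definitions (not taken from the paper's code):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error over all sensors and time steps.
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error over all sensors and time steps.
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(diff)))
```

Both metrics average over every prediction in the test horizon; RMSE penalises large errors more heavily, which is why event periods tend to dominate it.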
Researcher Affiliation | Academia | Xinyu Su, Feng Liu, Yanchuan Chang, Egemen Tanin, Majid Sarvi and Jianzhong Qi, The University of Melbourne; {suxs3@student., feng.liu1@, yanchuan.chang@, etanin@, majid.sarvi@, jianzhong.qi@}unimelb.edu.au
Pseudocode | No | The paper describes the DualCast framework, its dual-branch structure, optimisation, and cross-time attention module using text, equations, and diagrams (e.g., Figure 2 and Figure 4). However, there are no explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our source code is available at https://github.com/suzy0223/DualCast.
Open Datasets | Yes | Datasets. We use two freeway traffic datasets and an urban traffic dataset: PEMS03 and PEMS08 [PeMS, 2001] contain traffic flow data collected by 358 and 170 sensors on freeways in California; Melbourne [Su et al., 2024b] contains traffic flow data collected by 182 sensors in the City of Melbourne, Australia. All datasets used in this study are publicly available and do not contain any personally identifiable information.
Dataset Splits | Yes | We split each dataset into training, validation, and testing sets by 7:1:2 along the time axis.
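A chronological 7:1:2 split keeps test data strictly in the future relative to training data. A minimal sketch of such a split (a hypothetical helper, assuming the series is already ordered by time):

```python
def split_along_time(series, ratios=(0.7, 0.1, 0.2)):
    """Split a time-ordered sequence into train/val/test chunks.

    The default 7:1:2 ratios match the paper's setup; slicing by
    position (not shuffling) preserves temporal order.
    """
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]
    return train, val, test
```

Unlike a random split, this guarantees that no future observations leak into the training set.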
Hardware Specification | Yes | All experiments are run on an NVIDIA Tesla A100 GPU with 80 GB RAM.
Software Dependencies | No | We use the released code of the competitors, except for STTN, which is implemented from LibCity [Wang et al., 2021]. We implement DualCast with the self-attention-based models following their source code, using PyTorch. The specific version numbers for PyTorch or LibCity are not provided.
Experiment Setup | Yes | We train the models using Adam with a learning rate starting at 0.001, each for 100 epochs. For the models using DualCast, we use grid search on the validation sets to tune the hyper-parameters α, β, and γ. Table 4 (Appendix C.1 [Su et al., 2024a]) lists these hyper-parameter values.
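The grid search over α, β, and γ can be sketched as an exhaustive sweep that keeps the combination with the lowest validation error. The grid values and the `validate` callback below are placeholders (the actual search ranges are in the paper's Appendix C.1):

```python
import itertools

# Hypothetical candidate values for the loss weights; the paper's
# Table 4 lists the values actually used per dataset.
GRID = {"alpha": [0.1, 0.5, 1.0], "beta": [0.1, 0.5], "gamma": [0.1, 0.5]}

def grid_search(validate, grid):
    """Exhaustively evaluate every combination in `grid`.

    `validate` is a callback that trains/evaluates the model with the
    given hyper-parameters and returns a validation error to minimise.
    """
    best_params, best_err = None, float("inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        err = validate(params)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```

In practice each `validate` call would run a full training job with Adam (lr 0.001, 100 epochs), so the grid is kept small.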