CAN-ST: Clustering Adaptive Normalization for Spatio-temporal OOD Learning

Authors: Min Yang, Yang An, Jinliang Deng, Xiaoyu Li, Bin Xu, Ji Zhong, Xiankai Lu, Yongshun Gong

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type Experimental Extensive experiments on multiple datasets with diverse forecasting models demonstrate that CAN-ST consistently improves performance by over 20% on average and outperforms SOTA normalization methods.
Researcher Affiliation Collaboration Min Yang (1), Yang An (1), Jinliang Deng (2,3), Xiaoyu Li (1), Bin Xu (1), Ji Zhong (4), Xiankai Lu (1), and Yongshun Gong (1). (1) Shandong University; (2) HKGAI, Hong Kong University of Science and Technology; (3) Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology; (4) Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Co., Ltd.
Pseudocode No The paper describes the methodology using textual explanations and mathematical equations in Sections 4.1, 4.2, and 4.3, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code No The paper does not contain an explicit statement about the release of source code for the described methodology, nor does it provide any direct links to a code repository.
Open Datasets Yes Datasets: We conduct our experiments on four real-world datasets: Bike CHI (https://www.divvybikes.com/system-Data), Taxi CHI (https://data.cityofchicago.org/), PEMS08 (https://pems.dot.ca.gov/) and Speed NYC (https://www.nyc.gov/html/dot/html/motorist/atis.shtml). The details about these datasets are listed in Table 1.
Dataset Splits Yes We adopted the data partitioning strategy established in prior work [Jiang et al., 2021], which chronologically divides the data into training, validation, and testing subsets with a 6:2:2 ratio [Wang et al., 2024b].
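The chronological 6:2:2 partitioning described above can be sketched as follows. This is a minimal illustration of the general protocol, not code from the paper; the function name and the stand-in data are assumptions.

```python
import numpy as np

def chronological_split(series, ratios=(0.6, 0.2, 0.2)):
    """Split a time-ordered array into train/val/test subsets without
    shuffling, preserving temporal order (6:2:2 by default)."""
    n = len(series)
    train_end = int(n * ratios[0])
    val_end = train_end + int(n * ratios[1])
    return series[:train_end], series[train_end:val_end], series[val_end:]

# Stand-in for a spatio-temporal sequence of 100 time steps.
data = np.arange(100)
train, val, test = chronological_split(data)
print(len(train), len(val), len(test))  # 60 20 20
```

Because the split is chronological rather than random, the test set always lies strictly after the training data in time, which is what makes the setup suitable for evaluating distribution shift.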
Hardware Specification No The paper mentions runtime efficiency on a specific dataset (Taxi CHI) but does not provide any specific hardware details such as GPU/CPU models or memory specifications.
Software Dependencies No The paper names the ADAM optimizer but does not specify software versions or library dependencies: We use ADAM [Kingma, 2014] as the default optimizer across all the experiments and report the root mean squared error (RMSE) and mean absolute error (MAE) as the evaluation metrics.
Experiment Setup Yes We use ADAM [Kingma, 2014] as the default optimizer across all the experiments and report the root mean squared error (RMSE) and mean absolute error (MAE) as the evaluation metrics. ... For the Taxi CHI and Bike CHI datasets, the prediction horizons are set to {1 hour, 3 hours}, while for other datasets, the horizons are {5 minutes, 15 minutes}. Regarding the input sequence length, we follow standard protocols, fixing the input window length to 12 hours for the Taxi CHI and Bike CHI datasets and 1 hour for the remaining datasets. ... In this section, we analyze the influence of different C values on prediction accuracy in Figure 4. When C = 16, the model consistently achieves optimal performance.
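The two evaluation metrics named in the setup, RMSE and MAE, can be computed as in this minimal NumPy sketch (the sample arrays are illustrative, not values from the paper):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between predictions and ground truth."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(diff ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error between predictions and ground truth."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(diff)))

# Illustrative values only.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.5, 2.0])
print(round(rmse(y_true, y_pred), 4))  # 0.6455
print(mae(y_true, y_pred))             # 0.5
```

RMSE penalizes large errors more heavily than MAE because of the squaring, which is why papers commonly report both.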