Investigating Pattern Neurons in Urban Time Series Forecasting

Authors: Chengxin Wang, Yiran Zhao, Shaofeng Cai, Gary Tan

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results demonstrate that PN-Train considerably improves forecasting accuracy for low-frequency events while maintaining high performance for high-frequency events. Extensive experiments demonstrate that PN-Train significantly improves the forecasting accuracy of state-of-the-art methods across real-world datasets.
Researcher Affiliation | Academia | Chengxin Wang, Yiran Zhao, Shaofeng Cai, Gary Tan (National University of Singapore)
Pseudocode | Yes | Algorithm 1: Pattern Neuron Guided Training Method
Open Source Code | Yes | The code is available at https://github.com/cwang-nus/PN-Train.
Open Datasets | Yes | We perform experiments on two real-world datasets from two urban scenarios: Metro Traffic (Hogue, 2019) and Pedestrian (Fang et al., 2024). Detailed dataset statistics are provided in Appendix A.1.
Dataset Splits | Yes | We split the dataset chronologically into training, validation, and test sets in a 6:2:2 ratio.
Hardware Specification | Yes | All experiments are conducted using PyTorch (Paszke et al., 2019) on a single NVIDIA A100 80GB GPU.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' and 'AdamW optimizer (Loshchilov & Hutter, 2019)' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | The look-back window L and forecasting horizon H are both set to 12. The selective ratio ϵ is 0.5, with a pattern neuron detection sample length B of 30 and a fine-tuning sample length R of 10. We split the dataset chronologically into training, validation, and test sets in a 6:2:2 ratio. During training, the UTSM is optimized using the AdamW optimizer (Loshchilov & Hutter, 2019) with a learning rate α1 of 0.001. Early stopping is applied with a patience of 20 epochs, and the maximum number of epochs is set to 300. For pattern neuron optimization, the UTSM is fine-tuned using the same optimizer with a learning rate α2 of 0.002 for one epoch. The batch size is 32.
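The 6:2:2 chronological split quoted in the Dataset Splits row is straightforward to reproduce. A minimal sketch, assuming only what the paper states (the function name and `ratios` argument are my own, not taken from the released code):

```python
def chronological_split(series, ratios=(0.6, 0.2, 0.2)):
    """Split a time-ordered sequence into train/val/test without shuffling,
    so the test set always holds the most recent observations."""
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]  # remainder, i.e. the latest data
    return train, val, test
```

Each split would then presumably be windowed into input/target pairs of length L = 12 and H = 12, per the Experiment Setup row.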
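The hyperparameters reported in the Experiment Setup row can be collected into a single configuration, and the patience-based early stopping rule stated there is easy to pin down precisely. A sketch in plain Python; only the values come from the paper, while the dict keys and the `should_stop` helper are my naming, not the authors':

```python
# Values as reported in the paper's experiment setup; names are assumptions.
CONFIG = {
    "lookback_L": 12,
    "horizon_H": 12,
    "selective_ratio_eps": 0.5,  # ε
    "detection_len_B": 30,       # pattern neuron detection sample length
    "finetune_len_R": 10,        # fine-tuning sample length
    "lr_train_a1": 1e-3,         # α1, AdamW, main training
    "lr_finetune_a2": 2e-3,      # α2, AdamW, one fine-tuning epoch
    "batch_size": 32,
    "max_epochs": 300,
    "patience": 20,
}

def should_stop(val_losses, patience=CONFIG["patience"]):
    """Early stopping: halt once the best validation loss is more than
    `patience` epochs old (ties resolve to the earliest best epoch)."""
    best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
    return len(val_losses) - 1 - best_epoch >= patience
```

With patience 20, training stops at the first epoch whose validation loss has not improved on the running best for 20 consecutive epochs, subject to the 300-epoch cap.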