Hierarchical Classification Auxiliary Network for Time Series Forecasting
Authors: Yanru Sun, Zongxia Xie, Dongyue Chen, Emadeldeen Eldele, Qinghua Hu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments integrating HCAN with state-of-the-art forecasting models demonstrate substantial improvements over baselines on several real-world datasets. Code: https://github.com/syrGithub/HCAN |
| Researcher Affiliation | Academia | Yanru Sun1, Zongxia Xie1*, Dongyue Chen1, Emadeldeen Eldele2,3, Qinghua Hu1 1 Tianjin Key Lab of Machine Learning, College of Intelligence and Computing, Tianjin University, China 2 Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore 3 Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology in prose and mathematical formulations but does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Code: https://github.com/syrGithub/HCAN |
| Open Datasets | Yes | We ran our experiments on ten publicly available real-world multivariate time series datasets, namely: ETT, Exchange-Rate, Weather, ILI, Electricity, Traffic, and Solar Wind. |
| Dataset Splits | Yes | We followed the standard protocol in the data preprocessing, where we split all datasets into training, validation, and testing in chronological order by a ratio of 6:2:2 for the ETT dataset and 7:1:2 for the other datasets (Zeng et al. 2023). |
| Hardware Specification | Yes | HCAN was implemented by PyTorch (Paszke et al. 2019) and trained on a single NVIDIA RTX 3090 24GB GPU. |
| Software Dependencies | No | The paper mentions PyTorch and ADAM optimizer but does not provide specific version numbers for any software libraries or dependencies. |
| Experiment Setup | Yes | Following previous works (Nie et al. 2022; Zeng et al. 2023), we used ADAM (Kingma and Ba 2014) as the default optimizer across all the experiments and reported the MSE and mean absolute error (MAE) as the evaluation metrics. A lower MSE/MAE value indicates a better performance. Detailed results for MSE/MAE are provided in the Appendix. We conducted the experiments for the same number of epochs as the baseline, and the initial learning rate was chosen from {5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5} through a grid search for different datasets. β was chosen from {1, 0.1, 0.01} and γ was chosen from {1, 0.1, 0.01} via grid search to obtain the best results. For HCAN parameters, we set Kc = 2 and Kf = 4. All the experiments were repeated five times with fixed random seeds. |
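The split protocol quoted above (chronological train/validation/test partitions of 6:2:2 for ETT and 7:1:2 for the other datasets) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and ratio defaults are assumptions.

```python
import numpy as np

def chronological_split(series, ratios=(0.7, 0.1, 0.2)):
    """Split a series into train/val/test in chronological order.

    `ratios` mirrors the quoted protocol: (0.6, 0.2, 0.2) for ETT,
    (0.7, 0.1, 0.2) for the other datasets.
    """
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]
    return train, val, test

# Hypothetical data: 1000 time steps, ETT-style 6:2:2 split.
data = np.arange(1000)
train, val, test = chronological_split(data, ratios=(0.6, 0.2, 0.2))
print(len(train), len(val), len(test))  # 600 200 200
```

Because the partitions follow time order (no shuffling), the test set always covers the most recent observations, which matches the standard forecasting evaluation setup the paper cites.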
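The hyperparameter selection described in the setup (learning rate, β, and γ each chosen from a small candidate set via grid search) amounts to an exhaustive sweep over the Cartesian product of the grids. The sketch below assumes a placeholder `validation_mse` scoring function; in the paper's setting this would be the validation MSE of HCAN trained with each configuration.

```python
from itertools import product

# Candidate grids quoted from the experiment setup.
learning_rates = [5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5]
betas = [1, 0.1, 0.01]
gammas = [1, 0.1, 0.01]

def validation_mse(lr, beta, gamma):
    # Placeholder stand-in: in practice, train the model with this
    # configuration and return its validation MSE. This dummy score
    # is minimized at (1e-3, 0.1, 0.01) purely for illustration.
    return (lr - 1e-3) ** 2 + (beta - 0.1) ** 2 + (gamma - 0.01) ** 2

# Exhaustive search over all 6 * 3 * 3 = 54 configurations.
best = min(product(learning_rates, betas, gammas),
           key=lambda cfg: validation_mse(*cfg))
print(best)  # (0.001, 0.1, 0.01)
```

With only 54 combinations, exhaustive search is cheap relative to training cost, which is presumably why the paper selects these hyperparameters per dataset rather than using a shared setting.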