Hi-Patch: Hierarchical Patch GNN for Irregular Multivariate Time Series
Authors: Yicheng Luo, Bowen Zhang, Zhen Liu, Qianli Ma
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on 8 datasets demonstrate that Hi-Patch outperforms state-of-the-art models in IMTS forecasting and classification tasks. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, South China University of Technology, Guangzhou, China. Correspondence to: Qianli Ma <EMAIL>. |
| Pseudocode | Yes | The pseudo-code for Hi-Patch is presented in Appendix A (Algorithm 1). |
| Open Source Code | Yes | Code is available at: https://github.com/qianlima-lab/Hi-Patch. |
| Open Datasets | Yes | For the forecasting task, we follow (Zhang et al., 2024) and use four datasets: PhysioNet (Silva et al., 2012), MIMIC-III (Johnson et al., 2016), Human Activity, and USHCN (Menne et al., 2015), covering the fields of healthcare, biomechanics, and climate science. For the classification task, we conduct experiments on four datasets in the medical field, where IMTS is most widely used, namely P19 (Reyna et al., 2020), PhysioNet (Silva et al., 2012), MIMIC-III (Johnson et al., 2016), and P12 (Goldberger et al., 2000). P19 ... Available at https://physionet.org/content/challenge-2019/1.0.0/. P12 ... Available at https://physionet.org/content/challenge-2012/1.0.0/. MIMIC-III ... Available at https://physionet.org/content/mimiciii/1.4/. |
| Dataset Splits | Yes | For the forecasting task, we follow the data pre-processing method described in (Zhang et al., 2024) and randomly divide all the instances of each dataset into training, validation, and test sets according to ratios of 6:2:2. We use Mean Square Error (MSE) and Mean Absolute Error (MAE) to evaluate forecasting performance. For the classification task, we follow the method described in (Harutyunyan et al., 2019) and divide the MIMIC-III dataset into training, validation, and testing parts with a ratio of 70%, 15%, and 15%. For the remaining three datasets, we adhere to the approach of (Zhang et al., 2022), with a training, validation, and testing ratio of 8:1:1. |
| Hardware Specification | Yes | All the models are experimented with using the PyTorch library on 2 GeForce RTX-3090 24G GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch library' and 'Adam optimizer' but does not provide specific version numbers for these software components. For example, 'All the models are experimented with using the PyTorch library on 2 GeForce RTX-3090 24G GPUs.' and 'We adopt the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001'. |
| Experiment Setup | Yes | We adopt the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001, stopping when the validation loss doesn't decrease over 10 epochs. All experiments are conducted with five random seeds, and the average and standard deviation are reported. All the models are experimented with using the PyTorch library on 2 GeForce RTX-3090 24G GPUs. The detailed settings of hyperparameters can be found in Appendix E. Appendix E: We search all hyperparameters in the grid for our proposed model Hi-Patch. Specifically, our model has a total of 4 hyperparameters: patch size P, dimension of node state dmodel, number of multi-head attention heads nheads, number of GAT layers L. ... Additionally, we search dmodel in {16, 32, 64, 128}, nheads in {1, 2, 4, 8} and L in {1, 2, 3}. The best hyperparameters for each dataset are reported in the code. |
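The split ratios and hyperparameter grid quoted above are concrete enough to sketch. The following is a minimal illustration (not taken from the Hi-Patch repository): a seeded random 6:2:2 instance split, plain-Python MSE/MAE metrics, and enumeration of the Appendix E grid over `dmodel`, `nheads`, and `L` (patch size P is searched too, but its candidate values are not quoted here, so it is omitted). All function names are hypothetical.

```python
import random
from itertools import product

def split_indices(n, ratios=(0.6, 0.2, 0.2), seed=0):
    """Randomly split n instance indices into train/val/test by the given ratios."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # seeded shuffle for reproducibility
    n_train, n_val = int(n * ratios[0]), int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def mse(y_true, y_pred):
    """Mean Square Error, one of the two forecasting metrics reported."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean Absolute Error, the other forecasting metric."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

# Grid from Appendix E (patch size P omitted; its values are not listed here).
grid = {"d_model": [16, 32, 64, 128], "n_heads": [1, 2, 4, 8], "n_layers": [1, 2, 3]}
configs = [dict(zip(grid, values)) for values in product(*grid.values())]

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # 600 200 200
print(len(configs))                     # 48 grid points
```

For the classification datasets the same helper applies with `ratios=(0.7, 0.15, 0.15)` for MIMIC-III and `ratios=(0.8, 0.1, 0.1)` for the rest.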