Learning Time-Series Representations by Hierarchical Uniformity-Tolerance Latent Balancing

Authors: Amin Jalali, Milad Soltany, Michael Greenspan, Ali Etemad

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach on a wide range of tasks, namely 128 UCR and 30 UEA datasets for univariate and multivariate classification, as well as Yahoo and KPI datasets for anomaly detection. The results demonstrate that TimeHUT outperforms prior methods by considerable margins on classification, while obtaining competitive results for anomaly detection. Finally, detailed sensitivity and ablation studies are performed to evaluate different components and hyperparameters of our method."
Researcher Affiliation | Academia | Amin Jalali, Milad Soltany, Michael Greenspan, Ali Etemad, Queen's University, Canada
Pseudocode | Yes | "Algorithm 1 provides PyTorch-like pseudo-code that describes the proposed TimeHUT model."
Open Source Code | Yes | "We have released our code implementation at https://github.com/aminjalali-research/TimeHUT to contribute to the area and enable fast and accurate reproducibility."
Open Datasets | Yes | "For classification, we use the standard UCR 128 univariate dataset (Dau et al., 2019) and UEA 30 multivariate dataset (Bagnall et al., 2018). ... In addition, we utilize the commonly used Yahoo (Laptev et al., 2015) and KPI (Ren et al., 2019) datasets for anomaly detection."
Dataset Splits | Yes | "For classification, we use the standard UCR 128 univariate dataset (Dau et al., 2019) and UEA 30 multivariate dataset (Bagnall et al., 2018). ... In the normal setting, each dataset is divided into two halves based on the time order, with one half used for training and the other for evaluation."
Hardware Specification | Yes | "We train our method using PyTorch 1.10 on 4 NVIDIA GeForce RTX 3090 GPUs."
Software Dependencies | Yes | "We train our method using PyTorch 1.10 on 4 NVIDIA GeForce RTX 3090 GPUs."
Experiment Setup | Yes | "Implementation details. We use the Adam optimizer with a learning rate of 1e-3. The batch size is set to 8, with the number of epochs determined by the dataset size: 200 epochs for datasets smaller than 100,000, and 600 epochs for larger datasets. The representation dimension is fixed at 320. During training, we segment large time-series sequences into 3,000 timestamps, following (Yue et al., 2022; Lee et al., 2024). For the details of all the hyperparameters for all the datasets used in our study, please see Appendix C."
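The quoted implementation details can be collected into a minimal configuration sketch. This is not the authors' code; the `TrainConfig` class and its field names are hypothetical, and only the hyperparameter values (learning rate, batch size, epoch rule, representation dimension, segment length) come from the paper's quoted setup.

```python
from dataclasses import dataclass


@dataclass
class TrainConfig:
    """Hypothetical container for the hyperparameters quoted above."""
    learning_rate: float = 1e-3   # Adam optimizer learning rate
    batch_size: int = 8
    repr_dim: int = 320           # fixed representation dimension
    max_segment_len: int = 3000   # large sequences segmented into 3,000 timestamps

    def num_epochs(self, dataset_size: int) -> int:
        """200 epochs for datasets smaller than 100,000; 600 for larger ones."""
        return 200 if dataset_size < 100_000 else 600


cfg = TrainConfig()
print(cfg.num_epochs(50_000))   # -> 200
print(cfg.num_epochs(250_000))  # -> 600
```

Per-dataset hyperparameters beyond these defaults are deferred to Appendix C of the paper.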