Time Series Representations with Hard-Coded Invariances

Authors: Thibaut Germain, Chrysoula Kosma, Laurent Oudre

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present an extended experimental evaluation illustrating the use and performance of hard-coded invariant convolutions.
Researcher Affiliation | Academia | 1 Université Paris-Saclay, Université Paris Cité, ENS Paris-Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190, Gif-sur-Yvette, France. Correspondence to: Thibaut Germain <EMAIL>, Chrysoula Kosma <EMAIL>.
Pseudocode | No | The paper contains mathematical formulations and definitions for the proposed methods (e.g., Definition 3.1, Proposition 1) but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and Experimental Details. The source code for this work is available on GitHub (https://github.com/sissykosm/TS-InvConv).
Open Datasets | Yes | We consider 5 datasets from the UCR archive (Dau et al., 2019)... We consider the 26 multivariate UEA datasets (Bagnall et al., 2018)... the human activity recognition UCIHAR (Anguita et al., 2013) dataset, the Sleep-EDF dataset (Goldberger et al., 2000)... Fault-Diagnosis dataset (Lessmeier et al., 2016)... SMD (Su et al., 2019), MSL and SMAP (Hundman et al., 2018), SWaT (Mathur & Tippenhauer, 2016) and PSM (Abdulaal et al., 2021).
Dataset Splits | Yes | For these datasets, we follow the same preprocessing as (Eldele et al., 2021), deriving train/validation/test sets with a 60 : 20 : 20 ratio... Similarly, for the five anomaly detection datasets, we split into train/validation/test sets with a 70 : 10 : 20 ratio (Xu, 2021).
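The ratio-based splits quoted above can be reproduced mechanically. The sketch below is illustrative only (the paper follows the preprocessing of Eldele et al., 2021; the function name and contiguous-index convention are assumptions, not the authors' code):

```python
def split_indices(n, ratios=(0.6, 0.2, 0.2)):
    """Partition range(n) into contiguous train/val/test index lists.

    Defaults to the 60:20:20 ratio used for the classification datasets;
    pass ratios=(0.7, 0.1, 0.2) for the anomaly-detection splits.
    """
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = list(range(0, n_train))
    val = list(range(n_train, n_train + n_val))
    test = list(range(n_train + n_val, n))  # remainder goes to test
    return train, val, test
```

In practice one would shuffle or stratify before slicing; the contiguous version above only illustrates the proportions.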
Hardware Specification | Yes | All experiments presented in this study were conducted on an Nvidia Tesla V100 GPU, with 40 cores and 756 GB of memory.
Software Dependencies | No | The paper mentions using several software libraries and optimizers (e.g., Adam optimizer, Time-Series-Library, sktime Library, Aeon Library) but does not provide specific version numbers for these components.
Experiment Setup | Yes | All experiments presented in this study were conducted on an Nvidia Tesla V100 GPU... We utilized the Adam optimizer with a learning rate of lr = 0.001 for both classification and unsupervised anomaly detection tasks. We also adopted a linear cosine annealing learning rate scheduler... For anomaly detection and the remaining methods, we utilized a learning rate scheduler with a 0.5 decrease rate per epoch. We trained the models for 100 epochs... We performed early stopping during training, after 20 epochs of no improvement...
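The quoted setup combines three standard ingredients: cosine annealing of the learning rate, a 0.5-per-epoch multiplicative decay for the anomaly-detection runs, and patience-based early stopping. A minimal stdlib sketch of those three schedules, assuming lr0 = 0.001, 100 epochs, and patience 20 as quoted (this is a re-implementation for illustration, not the authors' training loop, which presumably uses PyTorch schedulers):

```python
import math

def cosine_annealing_lr(epoch, total_epochs=100, lr0=1e-3, lr_min=0.0):
    """Cosine-annealed learning rate: lr0 at epoch 0, lr_min at the end."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

def step_decay_lr(epoch, lr0=1e-3, gamma=0.5):
    """Multiplicative decay with a 0.5 decrease rate per epoch."""
    return lr0 * gamma ** epoch

class EarlyStopping:
    """Signal a stop after `patience` consecutive epochs without improvement."""
    def __init__(self, patience=20):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:          # improvement: reset the counter
            self.best = val_loss
            self.bad_epochs = 0
        else:                             # no improvement this epoch
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True means stop training
```

With PyTorch, the equivalent schedulers would be `torch.optim.lr_scheduler.CosineAnnealingLR` and `ExponentialLR(gamma=0.5)` wrapped around an `Adam(lr=1e-3)` optimizer.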