Time Series Alignment with Global Invariances
Authors: Titouan Vayer, Romain Tavenard, Laetitia Chapel, Rémi Flamary, Nicolas Courty, Yann Soullard
TMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide an experimental study of DTW-GI (and its soft counterpart) on simulated data and real-world datasets. |
| Researcher Affiliation | Academia | Titouan Vayer EMAIL Université Lyon, INRIA, CNRS, ENS de Lyon, UCB Lyon 1, LIP, Lyon, France Romain Tavenard EMAIL Université de Rennes, CNRS, LETG, IRISA, Rennes, France Laetitia Chapel EMAIL Université Bretagne-Sud, CNRS, IRISA, Vannes, France Rémi Flamary remi.flamary@polytechnique.edu CMAP, Ecole Polytechnique, IP Paris, France Nicolas Courty EMAIL Université Bretagne-Sud, CNRS, IRISA, Vannes, France Yann Soullard EMAIL Université de Rennes, CNRS, LETG, IRISA, Rennes, France |
| Pseudocode | Yes | Algorithm 1: Block-Coordinate Descent for DTW-GI with Stiefel registration. P ← I_{p_x,p_y}; repeat: W_π ← alignment matrix from DTW(x, yPᵀ); M ← xᵀW_πy (see Equation 17); U, Σ, Vᵀ ← SVD(M); P ← UVᵀ; until convergence. |
| Open Source Code | No | Open source code of our method will be released upon publication. |
| Open Datasets | Yes | We use the Human3.6M dataset (Ionescu et al., 2014) which consists of 3.6 million video frames of human movements recorded in a controlled indoor motion capture setting. ... The Real Sense based Trajectory Digit (RTD) dataset Alam et al. (2020) is made of digit writing trajectories. ... For this experiment, we use the covers80 dataset (Ellis & Cotton, 2007) that consists of 80 cover pairs of pop songs and we evaluate the performance in terms of recall. |
| Dataset Splits | Yes | We use the Human3.6M dataset (Ionescu et al., 2014) ... We follow the same data partition as Coskun et al. (2017): the training set has 5 subjects (S6, S7, S8, S9 and S11) and the remaining 2 subjects (S1 and S5) compose the test set. In our experiments, 1) we split the limit frames as follows: we keep the first T′ = 300 timestamps to calculate the coefficients a_d, y_{T′}, x^{(i)}_{T′} and the transformations f_i; 2) we find the hyperparameter λ which gives the best prediction (w.r.t. the ℓ2 norm) for t ∈ [T′, T0] (where T0 = 400); 3) the remaining times [T0, T] are used for the test set. We set the last limit frame as T = 1100, which corresponds to predicting T − T0 = 700 timestamps, that is predicting 14 seconds of motion given the initial 8 seconds. |
| Hardware Specification | No | TV gratefully acknowledges the support of the Centre Blaise Pascal's IT test platform at ENS de Lyon (Lyon, France) for Machine Learning facilities. The platform operates the SIDUS solution (Quemener & Corvellec, 2013). No specific hardware components (e.g., CPU, GPU models) are mentioned for the experiments. |
| Software Dependencies | No | Unless otherwise specified, the set F of feature space transforms is the set of affine maps whose linear part lies in the Stiefel manifold. In all our experiments, the tslearn (Tavenard et al., 2020) implementation is used for baseline methods, and gradient descent on the Stiefel manifold is performed using geoopt (Kochurov et al., 2019; Becigneul & Ganea, 2019) in conjunction with PyTorch (Paszke et al., 2019). ... Numerical computations involve numpy (Harris et al., 2020), scipy (Virtanen et al., 2020) and scikit-learn (Pedregosa et al., 2011) for the CTW implementation. Although libraries are named, specific version numbers are not provided. |
| Experiment Setup | Yes | In these experiments, the number of iterations for BCD as well as the number of gradient steps for the gradient descent optimizer are set to 5,000. The BCD algorithm used for DTW-GI is stopped as soon as it reaches a local minimum, while early stopping is used for the gradient-descent variant with a patience parameter set to 100 iterations. ... In this experiment we set γ = 0.05 for the smoothness parameter. |
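The block-coordinate descent quoted in the Pseudocode row alternates two closed-form steps: a DTW alignment given the current registration P, then an orthogonal Procrustes update P ← UVᵀ from the SVD of M = xᵀW_πy. Below is a minimal numpy sketch of that loop, not the authors' implementation (which relies on tslearn/geoopt): it assumes a squared-Euclidean ground cost, a plain dynamic-programming DTW with backtracking, and the simple "stop when the cost no longer decreases" rule from the paper; all function names are ours.

```python
import numpy as np

def dtw_path(x, y):
    """Plain DTW with squared-Euclidean ground cost.
    Returns the binary alignment matrix W (Tx x Ty) of the optimal
    warping path and the optimal cumulative cost."""
    Tx, Ty = len(x), len(y)
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise costs
    cost = np.full((Tx + 1, Ty + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            cost[i, j] = d[i - 1, j - 1] + min(
                cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (Tx, Ty) to recover the alignment matrix W_pi.
    W = np.zeros((Tx, Ty))
    i, j = Tx, Ty
    while i > 0 and j > 0:
        W[i - 1, j - 1] = 1.0
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda ij: cost[ij])
    return W, cost[Tx, Ty]

def dtw_gi_bcd(x, y, n_iter=50):
    """BCD sketch for DTW-GI with Stiefel registration:
    1) W_pi <- DTW alignment between x and y @ P.T,
    2) P <- U V^T from the SVD of M = x^T W_pi y (Procrustes step),
    stopping as soon as the alignment cost stops decreasing."""
    px, py = x.shape[1], y.shape[1]
    P = np.eye(px, py)                      # P initialised at I_{px,py}
    best = np.inf
    for _ in range(n_iter):
        W, c = dtw_path(x, y @ P.T)         # alignment given P
        if c >= best - 1e-12:               # local minimum reached
            break
        best = c
        M = x.T @ W @ y                     # registration given alignment
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        P = U @ Vt                          # projection onto the Stiefel manifold
    return P, best
```

On a toy pair where y is a rotated copy of x, the loop recovers a near-exact rotation in a few iterations and drives the DTW cost close to zero, which matches the behaviour the paper reports for rigid-motion invariance on simulated data.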