TimePoint: Accelerated Time Series Alignment via Self-Supervised Keypoint and Descriptor Learning

Authors: Ron Shapira Weber, Shahar Benishay, Andrey Lavrinenko, Shahaf E. Finder, Oren Freifeld

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate that TimePoint consistently achieves faster and more accurate alignments than standard DTW, making it a scalable solution for time-series analysis. Our code is available at https://github.com/BGU-CS-VIL/TimePoint.
Researcher Affiliation Academia 1Department of Computer Science, Ben-Gurion University of the Negev (BGU). 2Data Science Research Center, BGU. 3School of Brain Sciences and Cognition, BGU. Correspondence to: Ron Shapira Weber <EMAIL>.
Pseudocode No The paper describes the TimePoint architecture and loss functions in Sections 4 and 4.4, and the data generation process in Section 3, but does not present a clearly labeled pseudocode or algorithm block.
Open Source Code Yes Our code is available at https://github.com/BGU-CS-VIL/TimePoint.
Open Datasets Yes We evaluate its generalization to real-world data using the UCR Time Series Archive (Dau et al., 2019).
Dataset Splits Yes We use the original train-test splits provided by the archive.
Hardware Specification Yes Training is performed on a single NVIDIA RTX6000 GPU with 48 GB of memory.
Software Dependencies No Our model, implemented in PyTorch, has a total of 200K trainable parameters. We have adapted the 2D WTConv layer from the official implementation (Finder et al., 2024) to 1D inputs.
Experiment Setup Yes Training is performed on a single NVIDIA RTX6000 GPU with 48 GB of memory. The model converges within approximately 100,000 iterations and 20 hours, with a batch size of 512 and the AdamW optimizer (Loshchilov, 2017) with a learning rate of 1e-4 and a cosine learning-rate scheduler. The encoder consists of 4 layers with kernel counts [128, 128, 256, 256].
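The reported setup (batch size 512, AdamW at learning rate 1e-4 with a cosine schedule over roughly 100,000 iterations, a 4-layer encoder with kernel counts [128, 128, 256, 256]) can be captured in a small configuration sketch. This is a minimal, dependency-free illustration, not the authors' code; the variable names and the exact decay-to-zero schedule are assumptions, since the paper does not specify warmup or a minimum learning rate.

```python
import math

# Hyperparameters as reported in the paper; names are illustrative, not from the repo.
CONFIG = {
    "batch_size": 512,
    "base_lr": 1e-4,                            # AdamW learning rate
    "total_iters": 100_000,                     # approximate convergence point
    "encoder_kernels": [128, 128, 256, 256],    # 4 encoder layers
}

def cosine_lr(step: int,
              base_lr: float = CONFIG["base_lr"],
              total: int = CONFIG["total_iters"]) -> float:
    """Cosine learning-rate schedule decaying from base_lr toward 0.

    Assumes no warmup and a zero floor, which the paper does not state.
    """
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total))
```

At step 0 the schedule returns the base rate (1e-4), halves it at the midpoint, and decays to zero at the final iteration; in a PyTorch training loop this corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR` wrapped around `torch.optim.AdamW`.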