Time Series Domain Adaptation via Channel-Selective Representation Alignment

Authors: Nauman Ahad, Mark A. Davenport, Eva L. Dyer

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on several time-series classification benchmarks and find that it consistently improves performance over existing methods. These results demonstrate the importance of adaptively selecting and screening different channels to enable more effective alignment across domains.
Researcher Affiliation | Academia | Nauman Ahad (EMAIL), School of Electrical & Computer Engineering, Georgia Institute of Technology; Mark A. Davenport (EMAIL), School of Electrical & Computer Engineering, Georgia Institute of Technology; Eva L. Dyer (EMAIL), Department of Biomedical Engineering, Georgia Institute of Technology.
Pseudocode | No | The paper describes the method using mathematical equations and textual descriptions, but does not include a dedicated pseudocode block or algorithm figure. Figure A1 in the appendix provides a visual description of the channel screening and selection method, but it is a diagram, not pseudocode.
Open Source Code | Yes | A Python implementation of the method is available at https://github.com/nerdslab/SSSS_TSA.
Open Datasets | Yes | HHAR: Stisen et al. (2015), License: CC BY 4.0, https://archive.ics.uci.edu/dataset/344/heterogeneity+activity+recognition; WISDM: Kwapisz et al. (2011), License: CC BY 4.0, https://archive.ics.uci.edu/dataset/507/wisdm+smartphone+and+smartwatch+activity+and+biometrics+dataset; UCIHAR: Anguita et al. (2013), License: CC BY 4.0, https://archive.ics.uci.edu/dataset/240/human+activity+recognition+using+smartphones; PXECG: Wagner et al. (2020), License: CC BY 4.0, https://physionet.org/content/ptb-xl/1.0.3/
Dataset Splits | Yes | The publicly available datasets already contain train and test splits for each domain adaptation scenario (these are also used by the Adatime benchmarking suite), and we use the same splits as Adatime. For results that require a held-out validation set, such as those in Table A3, we split the dedicated training set in these benchmarks into a random 70%/30% split: the larger split was used for training and the smaller for validation. Results were reported on the pre-designated test splits provided by these benchmarks.
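The random 70%/30% train/validation split described above can be sketched as follows. This is a minimal illustration, not the paper's actual code; the function name and the seeded NumPy shuffle are assumptions.

```python
import numpy as np

def split_train_val(n_samples, val_fraction=0.3, seed=0):
    # Shuffle sample indices and carve off a validation fraction (30% here),
    # leaving the remainder (70%) for training.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(round(n_samples * val_fraction))
    return idx[n_val:], idx[:n_val]  # (train indices, validation indices)
```

For example, `split_train_val(100)` returns 70 training indices and 30 validation indices that together cover all 100 samples without overlap.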
Hardware Specification | Yes | All experiments were performed on a single NVIDIA Quadro RTX 5000 GPU.
Software Dependencies | No | The paper mentions using the Adam optimizer and the Adatime benchmarking suite, but does not provide version numbers for any software libraries, programming languages, or specialized packages.
Experiment Setup | Yes | For all runs, we used a Sinkhorn regularization parameter γ = 1e-3. We used the Adam optimizer with a learning rate of 1e-3 for all experiments. All models were trained for 300 epochs before reporting the numbers in Table 1. For the HHAR and WISDM datasets, the temperature parameter τ for the softmax nonlinearity was set to 3; for UCIHAR it was set to 9 (as a larger number of channels was involved).
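To make the hyperparameters above concrete, the sketch below shows a temperature-scaled softmax (parameter τ) and a basic entropy-regularized Sinkhorn iteration (regularization γ). This is a self-contained illustration under stated assumptions, not the authors' implementation: the function names and uniform marginals are assumptions, and a regularization as small as γ = 1e-3 typically requires a log-domain implementation in practice to avoid numerical underflow.

```python
import numpy as np

def temperature_softmax(scores, tau=3.0):
    # Temperature-scaled softmax: a larger tau gives a softer, more uniform
    # weighting, which is consistent with using a larger tau when more
    # channels are involved.
    z = scores / tau
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sinkhorn_plan(cost, gamma=1e-3, n_iters=200):
    # Entropy-regularized optimal transport via plain Sinkhorn iterations.
    # gamma matches the paper's reported regularization strength; for gamma
    # this small, log-domain updates are usually needed in practice.
    n, m = cost.shape
    a = np.full(n, 1.0 / n)   # uniform source marginal (assumption)
    b = np.full(m, 1.0 / m)   # uniform target marginal (assumption)
    K = np.exp(-cost / gamma)
    v = np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # coupling with marginals ~ (a, b)
```

For example, `sinkhorn_plan(cost, gamma=0.1)` on a small cost matrix returns a coupling whose entries sum to 1 and whose row and column sums match the uniform marginals.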