Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement

Authors: Gaurav Patel, Christopher M. Sandino, Behrooz Mahasseni, Ellen Zippi, Erdrin Azemi, Ali Moin, Juri Minxha

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results demonstrate that low-rank weight disentanglement during source-model preparation enables parameter-efficient adaptation on the target side, consistently improving performance across various SFDA methods (Liang et al., 2020; Yang et al., 2021a; 2022; Ragab et al., 2023b) and time-series benchmarks (Ragab et al., 2023a;b).
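The paper's low-rank disentanglement is performed with Tucker/HOOI factorization of the source weights; as a simpler matrix-level illustration of the general idea (not the paper's actual implementation), a weight can be split into a truncated-SVD low-rank component plus a residual. The function name and the choice of SVD here are illustrative assumptions:

```python
import numpy as np

def low_rank_split(W, rank):
    """Illustrative sketch: split a weight matrix into a low-rank
    component and a residual via truncated SVD. The paper itself uses
    Tucker-style (HOOI) factorization of higher-order conv kernels.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    core = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-`rank` component
    residual = W - core                          # remaining detail
    return core, residual
```

On the target side, only the small low-rank component would then need to be updated, which is the source of the parameter efficiency claimed above.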
Researcher Affiliation | Collaboration | Purdue University, Apple
Pseudocode | Yes | Algorithm 1: the higher-order orthogonal iteration (HOOI) algorithm (De Lathauwer et al., 2000; Kolda & Bader, 2009). Input: tensor A ∈ ℝ^(I_1 × I_2 × ... × I_N), truncation ranks (R_1, R_2, ..., R_N), initial guess {U_0^(n) : n = 1, 2, ..., N}. Output: core tensor G, factor matrices {U_k^(n) : n = 1, 2, ..., N}.
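The HOOI algorithm referenced above is standard (De Lathauwer et al., 2000; Kolda & Bader, 2009); a minimal NumPy sketch of it, not the paper's code, could look as follows. The HOSVD-style initialization and fixed iteration count are assumptions for illustration:

```python
import numpy as np

def unfold(A, n):
    """Mode-n unfolding: move axis n to the front, flatten the rest."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_mul(A, M, n):
    """Mode-n product: contract matrix M against axis n of tensor A."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(A, n, 0), axes=1), 0, n)

def hooi(A, ranks, n_iters=10):
    """Higher-order orthogonal iteration for a Tucker decomposition.

    A      : input tensor of shape (I_1, ..., I_N)
    ranks  : truncation ranks (R_1, ..., R_N)
    Returns the core tensor G and factor matrices U[n] (orthonormal columns).
    """
    N = A.ndim
    # HOSVD-style initial guess: leading left singular vectors per mode
    U = [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(n_iters):
        for n in range(N):
            # Project A onto all factors except mode n, then update U[n]
            Y = A
            for m in range(N):
                if m != n:
                    Y = mode_mul(Y, U[m].T, m)
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
    # Core tensor: project A onto all factor subspaces
    G = A
    for m in range(N):
        G = mode_mul(G, U[m].T, m)
    return G, U
```

Reconstructing the tensor amounts to multiplying the core G back by each U[n] along its mode; for a tensor of exact multilinear rank (R_1, ..., R_N), the reconstruction is exact up to floating-point error.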
Open Source Code | No | The paper does not provide an explicit statement of code release or a link to a code repository. It only lists author contact emails.
Open Datasets | Yes | We utilize the AdaTime benchmarks proposed by Ragab et al. (2023a;b) to evaluate the SFDA methods: SSC (Goldberger et al., 2000), MFD (Lessmeier et al., 2016), HHAR (Stisen et al., 2015), UCIHAR (Anguita et al., 2013), and WISDM (Kwapisz et al., 2011).
Dataset Splits | Yes | Adaptations are conducted using 0.5%, 5%, and 100% of the total unlabeled target samples, randomly sampled in a stratified manner. Table 3 outlines the specific details of each dataset, including the number of domains, sensor channels, class categories, sample lengths, and the total sample count for both training and testing sets.
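Stratified subsampling of the target set, as described above, can be sketched as follows. This is a generic NumPy sketch, not the paper's code; it assumes per-sample class labels are available to define the strata (used here only to control the split, since the target samples are otherwise treated as unlabeled):

```python
import numpy as np

def stratified_subsample(labels, fraction, seed=0):
    """Pick roughly `fraction` of the indices while preserving
    per-class proportions. Returns the selected sample indices.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    selected = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        n = max(1, round(fraction * idx.size))  # keep >= 1 sample per class
        selected.append(rng.choice(idx, size=n, replace=False))
    return np.sort(np.concatenate(selected))
```

For example, with fraction=0.05 on a balanced two-class set of 200 samples, this selects 5 samples from each class.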
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper states that "the backbone and classifier weights are optimized using the Adam optimizer (Kingma, 2014) with a learning rate of 1e-3," but it does not specify software dependencies such as library versions (e.g., PyTorch, TensorFlow, or Python versions).
Experiment Setup | Yes | The backbone weights are optimized to adapt to the target distribution, with Adam (Kingma, 2014) used as the optimizer to learn the target-adapted weights. We experiment with a range of learning rates: {5e-4, 1e-4, 5e-5, 1e-5, 5e-6, 1e-6, 5e-7, 1e-7} for each method (including the baseline) and report the best performance achieved. For all datasets, we utilize a simple 3-layer 1D-CNN backbone following (Ragab et al., 2023b)... Specifically, we set the filter sizes to 25 for SSC, 32 for MFD, 5 for HHAR, 5 for WISDM, and 5 for UCIHAR, following (Ragab et al., 2023a).
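The learning-rate protocol above (run each method once per rate, report the best) can be sketched as a simple grid search. The function name and the `train_and_eval` callable are hypothetical placeholders, standing in for a full adaptation-plus-evaluation run:

```python
def lr_sweep(train_and_eval,
             lrs=(5e-4, 1e-4, 5e-5, 1e-5, 5e-6, 1e-6, 5e-7, 1e-7)):
    """Run adaptation once per learning rate and keep the best result.

    train_and_eval : callable mapping a learning rate to a target-domain
                     score (e.g., accuracy); assumed supplied by the user.
    Returns the best learning rate and its score.
    """
    results = {lr: train_and_eval(lr) for lr in lrs}
    best_lr = max(results, key=results.get)
    return best_lr, results[best_lr]
```

Note that selecting the best rate by target performance is an oracle-style protocol, which is why the paper applies it uniformly to the baselines as well.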