Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Phase-driven Generalizable Representation Learning for Nonstationary Time Series Classification
Authors: Payal Mohapatra, Lixu Wang, Qi Zhu
TMLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations on five datasets from sleep-stage classification, human activity recognition, and gesture recognition against 13 state-of-the-art baseline methods demonstrate that PhASER consistently outperforms the best baselines by an average of 5% and up to 11% in some cases. Additionally, the principles of PhASER can be broadly applied to enhance the generalizability of existing time-series representation learning models. |
| Researcher Affiliation | Academia | Payal Mohapatra, Northwestern University, Evanston, Illinois, USA; Lixu Wang, Northwestern University, Evanston, Illinois, USA; Qi Zhu, Northwestern University, Evanston, Illinois, USA |
| Pseudocode | No | The paper describes the methodology in prose and mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/payalmohapatra/PhASER |
| Open Datasets | Yes | We conduct experiments on three common time-series applications: Human Activity Recognition (HAR), Sleep-Stage Classification (SSC), and Gesture Recognition (GR). For HAR, we use 3 benchmark datasets: WISDM (Kwapisz et al., 2011) ... UCIHAR (Bulbul et al., 2018) ... HHAR (Stisen et al., 2015) ... For SSC, the dataset (Goldberger et al., 2000) ... For GR, the dataset (Lobov et al., 2018) |
| Dataset Splits | Yes | Each dataset is divided into four distinct non-overlapping cross-domain scenarios, following the approach in (Lu et al., 2023). Details are provided in Section D.1 of the Appendix. 20% of the training data is reserved for validation. ... Table 12: Target domain splits for 4 scenarios of each dataset. |
| Hardware Specification | Yes | All experiments are performed on an Ubuntu OS server equipped with NVIDIA TITAN RTX GPU cards using the PyTorch framework. |
| Software Dependencies | No | The paper mentions using 'PyTorch framework' and 'scipy library' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | Every experiment is carried out with 3 different seeds (2711, 2712, 2713). During model training, we use the Adam optimizer (Kingma et al., 2020) with a learning rate from 1e-5 to 1e-3, and the maximum number of epochs is set to 150 based on the suitability of each setting. We tune these optimization-related hyperparameters for each setting and save the best model checkpoint, with early exit based on the minimum loss value achieved on the validation set. ... For all HAR and GR models we adopt c as 1, and for SSC c is 4. ... The sub-spectral feature normalization uses a group number of 3 and follows Equation 2.3 for operation. |
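The checkpointing rule quoted in the Experiment Setup row (train for up to 150 epochs, keep the checkpoint with the minimum validation loss, exit early once it stops improving) can be sketched as follows. This is a minimal illustration, not the paper's code: the `patience` value and the toy loss curve are assumptions, since the paper does not specify them.

```python
# Sketch of the early-exit / best-checkpoint rule described above:
# track the epoch with the minimum validation loss and stop once no
# improvement has been seen for `patience` epochs (assumed value).

def select_best_epoch(val_losses, max_epochs=150, patience=10):
    """Return (best_epoch, best_loss) under a min-val-loss early-exit rule."""
    best_epoch, best_loss = 0, float("inf")
    for epoch, loss in enumerate(val_losses[:max_epochs]):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss  # would save checkpoint here
        elif epoch - best_epoch >= patience:
            break  # early exit: no improvement within `patience` epochs
    return best_epoch, best_loss

# Toy validation-loss curve: improves for a few epochs, then plateaus.
losses = [1.0, 0.8, 0.6, 0.55, 0.56, 0.57, 0.58, 0.59, 0.60, 0.61]
print(select_best_epoch(losses, patience=5))  # -> (3, 0.55)
```

In a real PyTorch run, the branch marked "would save checkpoint here" would call `torch.save` on the model's state dict; the selection logic itself is framework-agnostic.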