Content-aware Balanced Spectrum Encoding in Masked Modeling for Time Series Classification

Authors: Yudong Han, Haocong Wang, Yupeng Hu, Yongshun Gong, Xuemeng Song, Weili Guan

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on ten time-series classification datasets show that our method surpasses a broad set of baselines. Meanwhile, a series of explanatory results is showcased to demystify the behavior of our method." From the Experiments section: "In this section, we first introduce the experiment setup. Then, we present and analyze the quantitative results of our model and the baselines on time series classification. Finally, we demonstrate the effectiveness of our model through analytical experiments."
Researcher Affiliation | Academia | Yudong Han (1,2,*), Haocong Wang (1,*), Yupeng Hu (1), Yongshun Gong (1), Xuemeng Song (3), Weili Guan (4). Affiliations: 1 School of Software, Shandong University; 2 Beijing Institute of Technology; 3 School of Computer Science and Technology, Shandong University; 4 Harbin Institute of Technology (Shenzhen). EMAIL, EMAIL
Pseudocode | No | The paper describes the Content-aware Interaction Modulation Unit (CIM) and the Spectrum Energy Rebalance Unit (SER) through mathematical formulations and textual descriptions, but it includes no distinct pseudocode block or algorithm section.
Open Source Code | No | The paper contains no explicit statement about releasing the source code and provides no link to a code repository.
Open Datasets | Yes | "Datasets: We conduct experiments on ten publicly available datasets to validate the performance of our model, including Human Activity Recognition (HAR) (Anguita et al. 2012) and nine large datasets from the UEA (Bagnall et al. 2018) and UCR (Dau et al. 2019) archives, which are referred to as PS, SRSCP1, MI, FM, AWR, SAD, ECG5000, FB, Uware."
Dataset Splits | No | The paper mentions evaluating performance via "linear evaluation and fine-tuning evaluation" and refers to using "labeled data", but it does not specify explicit percentages or counts for the training, validation, and test splits in the main text. Regarding datasets, it states only that "more details are provided in supplementary materials."
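The two evaluation protocols named above follow a standard pattern in masked time-series modeling: linear evaluation freezes the pretrained encoder and trains only a classification head, while fine-tuning updates both. A minimal PyTorch sketch, assuming a hypothetical pretrained `encoder` and feature dimension (the paper's own evaluation code is not available):

```python
import torch
import torch.nn as nn

def build_eval_model(encoder: nn.Module, feat_dim: int, n_classes: int,
                     linear_eval: bool) -> nn.Module:
    """Attach a linear classification head to a pretrained encoder.

    linear_eval=True  -> freeze the encoder, train only the head
    linear_eval=False -> fine-tune encoder and head jointly
    """
    if linear_eval:
        for p in encoder.parameters():
            p.requires_grad = False
    head = nn.Linear(feat_dim, n_classes)
    return nn.Sequential(encoder, head)

# Illustrative usage with a stand-in encoder (not the paper's model);
# shapes loosely follow HAR-style input (128 time steps, 9 channels, 6 classes).
enc = nn.Sequential(nn.Flatten(), nn.Linear(128 * 9, 64))
model = build_eval_model(enc, feat_dim=64, n_classes=6, linear_eval=True)
trainable = [p for p in model.parameters() if p.requires_grad]
# Under linear evaluation, only the head's weight and bias remain trainable.
```

The same helper covers both protocols; an optimizer would then be built over `trainable` only.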
Hardware Specification | Yes | "All methods are conducted with NVIDIA A10 and implemented by PyTorch."
Software Dependencies | No | The paper states the model is "implemented by PyTorch" but does not specify a version for PyTorch or any other software dependency.
Experiment Setup | Yes | "For the encoder, we use the same backbone as TimeMAE with the default 8 transformer layers, while the decoder is configured with 2 layers for both branches. We also adopt the same masking strategy as TimeMAE. We set the batch size as 128 and choose the AdamW optimizer with a learning rate of 1e-4."
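The reported hyperparameters can be sketched as a PyTorch configuration. Only the layer counts (8 encoder, 2 decoder), batch size (128), optimizer (AdamW), and learning rate (1e-4) come from the paper; the model width `D_MODEL`, head count `N_HEAD`, and sequence length are assumptions for illustration, and the decoder is approximated with encoder-style layers rather than the paper's actual two-branch decoder:

```python
import torch
import torch.nn as nn

D_MODEL, N_HEAD = 64, 4  # assumed sizes; not stated in the excerpt above

# Encoder: 8 transformer layers, matching the reported TimeMAE-style backbone.
enc_layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=N_HEAD, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=8)

# Decoder: 2 layers (the paper uses 2 per branch; a single stack stands in here).
dec_layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=N_HEAD, batch_first=True)
decoder = nn.TransformerEncoder(dec_layer, num_layers=2)

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-4)  # AdamW, lr = 1e-4
BATCH_SIZE = 128

x = torch.randn(BATCH_SIZE, 50, D_MODEL)  # (batch, sequence, features); length 50 assumed
y = decoder(encoder(x))                   # shape preserved: (128, 50, 64)
```

This only mirrors the stated configuration; the masking strategy and the two decoder branches would sit between the encoder and decoder calls in the real pipeline.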