FIC-TSC: Learning Time Series Classification with Fisher Information Constraint
Authors: Xiwen Chen, Wenhui Zhu, Peijie Qiu, Hao Wang, Huayu Li, Zihan Li, Yalin Wang, Aristeidis Sotiras, Abolfazl Razi
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We rigorously evaluate our method on 30 UEA multivariate and 85 UCR univariate datasets. Our empirical results demonstrate the superiority of the proposed method over 14 recent state-of-the-art methods. |
| Researcher Affiliation | Academia | 1Clemson University, USA. 2Arizona State University, USA. 3Washington University in St. Louis, USA. 4University of Arizona, USA. 5University of Massachusetts Boston, USA. Correspondence to: Xiwen Chen <EMAIL>, Abolfazl Razi <EMAIL>. |
| Pseudocode | Yes | The complete algorithmic summary is provided in Algorithm 1, located in Appendix B. ... Algorithm 1 FIC-TS (Training Phase) |
| Open Source Code | No | The text mentions using 'open source code' and provides a GitHub link, but this link is for 'TimesNet' and 'PatchTST' (third-party implementations on which the authors built their method), not for their own proposed FIC-TSC methodology. |
| Open Datasets | Yes | We rigorously evaluate our method on 30 UEA multivariate and 85 UCR univariate datasets. ... Specifically, we utilized the UEA multivariate datasets (30 datasets) (Bagnall et al., 2018) and the UCR univariate datasets (85 datasets) (Chen et al., 2015)... We select four popular publicly available datasets: TDBrain (Van Dijk et al., 2022), ADFTD (Miltiadous et al., 2023b;a), PTB-XL (Wagner et al., 2020), and Sleep EDF (Kemp et al., 2000). |
| Dataset Splits | Yes | Table 1. A summary of UEA and UCR datasets. ... Training Size Test Size ... In real-world healthcare applications, datasets are commonly partitioned by patient, such that individuals included in the training set are distinct from those in the testing set. |
| Hardware Specification | Yes | We implemented all experiments on a cluster node with NVIDIA A100 (40 GB). |
| Software Dependencies | Yes | We use AdamW optimizer (Loshchilov & Hutter, 2017) with a learning rate of 5e-3 and a weight decay of 1e-4. ... We use the PyTorch library (Paszke et al., 2019), version 1.13. |
| Experiment Setup | Yes | We use AdamW optimizer (Loshchilov & Hutter, 2017) with a learning rate of 5e-3 and a weight decay of 1e-4. ... (i) we fixed ϵ to 2 and the mini-batch size to 64 for UEA datasets and 16 for UCR datasets, and (ii) Full: To fully explore the ability of our method, we perform a grid search for hyperparameters for each dataset. Specifically, we search the mini-batch size from {16, 32, 64, 128} and ϵ from {2, 4, 10, 20}. |
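The experiment-setup row can be made concrete. Below is a minimal sketch of the two reported configurations: the fixed setting (ϵ = 2, batch size 64 for UEA / 16 for UCR) and the per-dataset grid search over batch size and ϵ. The dictionary layout and helper names are our own illustration, not from the paper; only the numeric values are quoted from the table above.

```python
import itertools

# Optimizer hyperparameters quoted in the table (AdamW).
OPTIMIZER = {"name": "AdamW", "lr": 5e-3, "weight_decay": 1e-4}

# "Fixed" setting: epsilon = 2; batch size 64 (UEA) or 16 (UCR).
FIXED = {
    "UEA": {"batch_size": 64, "epsilon": 2},
    "UCR": {"batch_size": 16, "epsilon": 2},
}

# "Full" setting: grid search per dataset over these values.
BATCH_SIZES = [16, 32, 64, 128]
EPSILONS = [2, 4, 10, 20]

def grid_configs():
    """Enumerate every (batch size, epsilon) pair searched per dataset."""
    return [
        {"batch_size": bs, "epsilon": eps}
        for bs, eps in itertools.product(BATCH_SIZES, EPSILONS)
    ]
```

Under this reading, the "Full" setting trains 4 × 4 = 16 configurations per dataset, and the "Fixed" UEA setting is one point inside that grid.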