Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Spectral-Aware Reservoir Computing for Fast and Accurate Time Series Classification
Authors: Shikang Liu, Chuyang Wei, Xiren Zhou, Huanhuan Chen
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate SARC on public benchmarks. As aforementioned, FreqRes is not confined to a specific RC model. By default, we implement FreqRes based on a Bidirectional ESN (Bi-ESN), which is demonstrated to be optimal across four conventional RC models in our ablation studies (Section 5.3). All experiments are conducted using Python 3.11 on a desktop with an Intel Core i7-14700KF CPU and an NVIDIA GeForce RTX 4090D GPU. |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China. Correspondence to: Xiren Zhou <EMAIL>, Huanhuan Chen <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Spectral-Aware Reservoir Computing |
| Open Source Code | Yes | Summarily, our main contributions are as follows: Code is available at https://github.com/ZOF-pt/SARC. |
| Open Datasets | Yes | We use the full UCR Time Series Archive (Dau et al., 2019) with 128 datasets spanning various applications such as activity recognition, health monitoring, and spectrum analysis. |
| Dataset Splits | Yes | Key hyperparameters are determined through a five-fold cross-validation on the training set, selecting input scaling from {0.5, 1, 2, 4}, spectral radii from {0.4, 0.6, 0.8}, regularization ζ from {0.5, 1}, and leaky rates ranging from 0 to 0.8 in 0.2 increments. The reservoir size is set to 10, the connectivity is 1, and the threshold κ is set to 100. For classification, we concatenate the derived dynamic features with the max-pooled hidden states and feed them to a default Ridge classifier. |
| Hardware Specification | Yes | All experiments are conducted using Python 3.11 on a desktop with an Intel Core i7-14700KF CPU and an NVIDIA GeForce RTX 4090D GPU. |
| Software Dependencies | Yes | All experiments are conducted using Python 3.11 on a desktop with an Intel Core i7-14700KF CPU and an NVIDIA GeForce RTX 4090D GPU. |
| Experiment Setup | Yes | Key hyperparameters are determined through a five-fold cross-validation on the training set, selecting input scaling from {0.5, 1, 2, 4}, spectral radii from {0.4, 0.6, 0.8}, regularization ζ from {0.5, 1}, and leaky rates ranging from 0 to 0.8 in 0.2 increments. The reservoir size is set to 10, the connectivity is 1, and the threshold κ is set to 100. For classification, we concatenate the derived dynamic features with the max-pooled hidden states and feed them to a default Ridge classifier. |
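The experiment-setup excerpt describes selecting key hyperparameters by five-fold cross-validation over a small grid, then feeding max-pooled reservoir states to a Ridge classifier. A minimal NumPy sketch of that selection loop is shown below. This is not the authors' SARC/FreqRes implementation: the leaky-ESN update convention, the toy dataset, and the closed-form ridge fit are assumptions introduced for illustration; only the grid values (input scaling {0.5, 1, 2, 4}, spectral radius {0.4, 0.6, 0.8}, ζ {0.5, 1}, leaky rate 0–0.8 in 0.2 steps, reservoir size 10, dense connectivity) come from the paper's excerpt.

```python
import itertools
import numpy as np

def esn_states(x, input_scaling, spectral_radius, leaky_rate, size=10, seed=0):
    """Hidden states of a simple leaky ESN over a univariate series x.

    Update convention assumed here: h = (1 - a) * tanh(W_in u + W h) + a * h,
    so leaky_rate = 0 reduces to a plain ESN (the paper's convention may differ).
    """
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-1, 1, size) * input_scaling
    w = rng.uniform(-1, 1, (size, size))  # connectivity 1: dense reservoir
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
    h, states = np.zeros(size), []
    for u in x:
        h = (1 - leaky_rate) * np.tanh(w_in * u + w @ h) + leaky_rate * h
        states.append(h)
    return np.stack(states)

def max_pool_features(X, **hp):
    """Max-pool each series' hidden states over time (one feature row per series)."""
    return np.stack([esn_states(x, **hp).max(axis=0) for x in X])

def ridge_fit_predict(f_tr, y_tr, f_te, zeta, n_classes):
    """Closed-form ridge regression on one-hot targets, argmax decoding."""
    Y = np.eye(n_classes)[y_tr]
    W = np.linalg.solve(f_tr.T @ f_tr + zeta * np.eye(f_tr.shape[1]), f_tr.T @ Y)
    return (f_te @ W).argmax(axis=1)

def cv_accuracy(F, y, zeta, k=5):
    """Mean accuracy over k interleaved folds (features precomputed once)."""
    idx, n_classes, accs = np.arange(len(F)), int(y.max()) + 1, []
    for fold in range(k):
        te = idx[fold::k]
        tr = np.setdiff1d(idx, te)
        pred = ridge_fit_predict(F[tr], y[tr], F[te], zeta, n_classes)
        accs.append((pred == y[te]).mean())
    return float(np.mean(accs))

# Grid from the paper's excerpt.
grid = {
    "input_scaling": [0.5, 1, 2, 4],
    "spectral_radius": [0.4, 0.6, 0.8],
    "leaky_rate": [0.0, 0.2, 0.4, 0.6, 0.8],
}
zetas = [0.5, 1]

# Toy two-class data (sine waves at different frequencies), for illustration only.
rng = np.random.default_rng(1)
X = [np.sin(np.linspace(0, 2 * np.pi * f, 60)) + 0.1 * rng.normal(size=60)
     for f in [1] * 20 + [3] * 20]
y = np.array([0] * 20 + [1] * 20)

best = (0.0, None)
for vals in itertools.product(*grid.values()):
    hp = dict(zip(grid, vals))
    F = max_pool_features(X, **hp)  # reservoir features are zeta-independent
    for zeta in zetas:
        acc = cv_accuracy(F, y, zeta)
        if acc > best[0]:
            best = (acc, {**hp, "zeta": zeta})
print(best)
```

Features are computed once per reservoir configuration and reused across the ζ values and folds, since the ridge penalty only affects the readout, not the reservoir dynamics.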