Channel Normalization for Time Series Channel Identification
Authors: Seunghan Lee, Taeyoung Park, Kibok Lee
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of CN and its variants by applying them to various TS models, achieving significant performance gains for both non-CID and CID models. In addition, we analyze the success of our approach from an information theory perspective. ... We provide extensive experiments on various backbones including TSFMs, achieving significant improvements for both CID and non-CID models as shown in Figure 2(a). |
| Researcher Affiliation | Academia | 1Department of Statistics and Data Science, Yonsei University 2KRAFTON; work done while at Yonsei University. Correspondence to: Taeyoung Park <EMAIL>, Kibok Lee <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Channel Normalization (CN) ... Algorithm 2 Adaptive Channel Normalization (ACN) ... Algorithm 3 Prototypical Channel Normalization (PCN) |
| Open Source Code | Yes | Code is available at https://github.com/seunghan96/CN. |
| Open Datasets | Yes | For the experiments, we use 12 datasets: four ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2) (Zhou et al., 2021), four PEMS datasets (PEMS03, PEMS04, PEMS07, PEMS08) (Chen et al., 2001), Exchange, Weather, ECL (Wu et al., 2021), and Solar-Energy (Solar) (Lai et al., 2018). Details of the dataset statistics are provided in Appendix A.1. |
| Dataset Splits | Yes | We follow the same data processing steps and train-validation-test split protocol as used in S-Mamba (Wang et al., 2025), maintaining a chronological order in the separation of training, validation, and test sets, using a 6:2:2 ratio for the Solar-Energy, ETT, and PEMS datasets, and a 7:1:2 ratio for the other datasets. |
| Hardware Specification | No | The paper does not explicitly mention specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, or memory amounts) used for running its experiments. |
| Software Dependencies | No | This choice is consistent with the default initialization used in PyTorch (Paszke et al., 2019) normalization layers, including Layer Normalization and Batch Normalization. The paper mentions PyTorch but does not specify its version number, nor the versions of any other software dependencies. |
| Experiment Setup | Yes | For all experiments involving TSFM, UniTS (Gao et al., 2024) is trained across multiple tasks using a unified protocol. ... Supervised training: Models are trained for 5 epochs with gradient accumulation, yielding an effective batch size of 1024. The initial learning rate is set to 3.2e-2 and adjusted using a multi-step decay schedule. Self-supervised pretraining: Models are trained for 10 epochs with an effective batch size of 4096, starting with a learning rate of 6.4e-3 and utilizing a cosine decay schedule. The embedding dimension is set to 64 for the supervised version and 32 for the prompt-tuning version. |
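The Pseudocode row lists Algorithm 1, Channel Normalization (CN). The paper's exact algorithm is in the repository linked above; as a rough illustration only, the sketch below assumes CN standardizes each channel independently over the time axis and then applies channel-specific learnable affine parameters, initialized to ones and zeros as in PyTorch's normalization layers (the Software Dependencies row notes this default). The tensor layout `(batch, time, channels)` and the function name are assumptions, not the authors' implementation.

```python
import numpy as np

def channel_norm(x, gamma, beta, eps=1e-5):
    """Hypothetical CN sketch: per-channel standardization + affine transform.

    x:     array of shape (batch, time, channels) -- assumed layout.
    gamma: per-channel scale, shape (channels,), initialized to ones.
    beta:  per-channel shift, shape (channels,), initialized to zeros.
    """
    mean = x.mean(axis=1, keepdims=True)   # per-instance, per-channel mean
    std = x.std(axis=1, keepdims=True)     # per-instance, per-channel std
    x_hat = (x - mean) / (std + eps)       # standardize each channel separately
    return gamma * x_hat + beta            # channel-specific affine parameters

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(2, 96, 4))  # 2 series, 96 steps, 4 channels
y = channel_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With the default initialization (`gamma=1`, `beta=0`), the output of each channel has approximately zero mean and unit variance; the per-channel parameters then let the model learn distinct rescalings per channel, which is what would make channels identifiable.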
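The Dataset Splits row describes a chronological 6:2:2 (or 7:1:2) train-validation-test protocol. A minimal sketch of such a split, with hypothetical names (the actual preprocessing follows S-Mamba's code, not this snippet):

```python
def chronological_split(n, ratios=(0.6, 0.2, 0.2)):
    """Split n time steps into contiguous train/val/test index ranges.

    Chronological order is preserved: no shuffling, so the test set
    always contains the most recent observations.
    """
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = range(0, n_train)
    val = range(n_train, n_train + n_val)
    test = range(n_train + n_val, n)  # remainder goes to the test set
    return train, val, test

# 6:2:2 protocol (Solar-Energy, ETT, PEMS); use (0.7, 0.1, 0.2) for the others
train, val, test = chronological_split(1000)
```

The key design point, per the quoted protocol, is that the three segments are contiguous in time, avoiding leakage of future information into training.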