KAN-AD: Time Series Anomaly Detection with Kolmogorov–Arnold Networks

Authors: Quan Zhou, Changhua Pei, Fei Sun, Han Jing, Zhengwei Gao, Haiming Zhang, Gaogang Xie, Dan Pei, Jianhui Li

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive evaluation demonstrates that KAN-AD achieves 15% higher F1 accuracy while being 50% faster than the original KAN architecture. Our code is publicly available at https://github.com/CSTCloudOps/KAN-AD. ... Our contributions are as follows: We performed comprehensive experiments on four publicly available datasets, verifying the effectiveness and efficiency against state-of-the-art TSAD benchmarks. ... In this section, we conduct comprehensive experiments primarily aimed at answering the following research questions.
Researcher Affiliation | Collaboration | 1 Computer Network Information Center, Chinese Academy of Sciences; 2 University of the Chinese Academy of Sciences; 3 Hangzhou Institute for Advanced Study, University of the Chinese Academy of Sciences; 4 Institute of Computing Technology, Chinese Academy of Sciences; 5 ZTE; 6 Department of Computer Science and Technology, Tsinghua University; 7 School of Frontier Sciences, Nanjing University. Correspondence to: Jianhui Li <EMAIL>, Changhua Pei <EMAIL>.
Pseudocode | No | The paper describes the methodology using textual explanations, mathematical equations, and illustrative figures (e.g., Figure 3 showing the KAN-AD process). However, it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Our code is publicly available at https://github.com/CSTCloudOps/KAN-AD.
Open Datasets | Yes | We evaluate KAN-AD on four publicly available UTS datasets: KPI (Competition, 2018), TODS (Lai et al., 2021), WSD (Zhang et al., 2022), and UCR (Wu & Keogh, 2021). ... We implemented MTS versions of KAN-AD in a popular time series library (THUML) and evaluated them on the common SMD (Su et al., 2019), MSL (Hundman et al., 2018a), SMAP (Hundman et al., 2018b), SWaT (Mathur & Tippenhauer, 2016), and PSM (Abdulaal et al., 2021) datasets.
Dataset Splits | Yes | The validation strategy varies by dataset: UCR reserves 20% of its training data for validation, while the other datasets employ a 4:1:5 ratio for the training, validation, and testing splits.
Hardware Specification | No | The paper reports 'GPU Time' and 'CPU Time' in Table 3 for the efficiency comparison, but does not specify the GPU or CPU models used for these measurements or for training. It lacks specific hardware details.
Software Dependencies | No | The paper mentions using a 'popular time series library (THUML)' for the MTS versions of KAN-AD, but does not provide a version number for this library or for any other software dependency.
Experiment Setup | Yes | For each time series, we train dedicated KAN-AD models using consistent hyperparameters: batch size 1024, learning rate 0.01, and maximum 100 epochs.
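The 4:1:5 train/validation/test ratio reported in the Dataset Splits row corresponds to a chronological partition of each series. The helper below is an illustrative sketch of such a split, assuming a simple proportional cut; the function name and ratios argument are hypothetical, not the authors' code.

```python
import numpy as np

def split_series(series, ratios=(0.4, 0.1, 0.5)):
    """Chronologically split a univariate time series into train/val/test.

    ratios=(0.4, 0.1, 0.5) mirrors the paper's 4:1:5 split; this is an
    illustrative sketch, not the released KAN-AD implementation.
    """
    n = len(series)
    train_end = int(n * ratios[0])
    val_end = train_end + int(n * ratios[1])
    return series[:train_end], series[train_end:val_end], series[val_end:]

# Example: a 10-point series splits into 4 train, 1 validation, 5 test points.
ts = np.arange(10)
train, val, test = split_series(ts)
```

For UCR, which has no separate validation set, the quoted strategy instead reserves 20% of the training portion, e.g. `split_series(ts, ratios=(0.8, 0.2, 0.0))` applied to the training data alone.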
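The Experiment Setup row gives the three hyperparameters shared across all per-series models. A minimal sketch of that configuration, with a small helper to compute batches per epoch, is shown below; the config and helper names are hypothetical placeholders, not part of the authors' released code.

```python
# Hyperparameter values quoted from the paper's experiment setup; the
# surrounding structure is an illustrative sketch, not the authors' code.
TRAIN_CONFIG = {
    "batch_size": 1024,     # windows per mini-batch
    "learning_rate": 0.01,  # optimizer step size
    "max_epochs": 100,      # upper bound on training epochs
}

def batches_per_epoch(num_windows: int,
                      batch_size: int = TRAIN_CONFIG["batch_size"]) -> int:
    # Ceiling division: mini-batches needed to cover num_windows once.
    return -(-num_windows // batch_size)
```

For example, a series yielding 2,500 sliding windows would take 3 mini-batches per epoch at batch size 1024, for at most 300 optimizer steps over the 100-epoch budget.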