CATCH: Channel-Aware Multivariate Time Series Anomaly Detection via Frequency Patching

Authors: Xingjian Wu, Xiangfei Qiu, Zhengyu Li, Yihang Wang, Jilin Hu, Chenjuan Guo, Hui Xiong, Bin Yang

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on 12 real-world datasets and 12 synthetic datasets demonstrate that CATCH achieves state-of-the-art performance. We make our code and datasets available at https://github.com/decisionintelligence/CATCH.
Researcher Affiliation | Academia | ¹East China Normal University, ²The Hong Kong University of Science and Technology (Guangzhou), EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Bi-level Gradient Descent Optimization; Algorithm 2: Calculation of freq-score
Open Source Code | Yes | We make our code and datasets available at https://github.com/decisionintelligence/CATCH.
Open Datasets | Yes | We make our code and datasets available at https://github.com/decisionintelligence/CATCH. We conduct experiments using 12 real-world datasets and 12 synthetic datasets (TODS datasets) to assess the performance of CATCH; more details of the benchmark datasets are included in Appendix A.1. The synthetic datasets are generated using the method reported in (Lai et al., 2021).
Dataset Splits | Yes | From the dataset table: TODS (Domain: Synthetic, Dim: 5, AR: 6.35%, Avg Total Length: 20,000, Avg Test Length: 5,000), including 6 anomaly types: global, contextual, shapelet, seasonal, trend, and mix anomalies. We use the provided source code (Lai et al., 2021) without alterations, except for adjusting the length parameter to generate a longer time series, to ensure a fair comparison: train_data = MultivariateDataGenerator(dim=DIM_NUM, stream_length=20000, ...); test_data = MultivariateDataGenerator(dim=DIM_NUM, stream_length=5000, ...)
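As a self-contained illustration of the train/test split sizes and one of the listed anomaly types ("global" point anomalies), here is a hedged pure-Python sketch. The function name and injection logic are simplified stand-ins for illustration only, not the TODS MultivariateDataGenerator implementation:

```python
import math
import random

def synthetic_series(length, anomaly_rate=0.0635, seed=0):
    """Toy stand-in for a synthetic anomaly generator: a seasonal
    signal with randomly injected 'global' point anomalies.
    Simplified illustration, not the TODS implementation."""
    rng = random.Random(seed)
    values, labels = [], []
    for t in range(length):
        x = math.sin(2 * math.pi * t / 50)  # base seasonal pattern
        if rng.random() < anomaly_rate:
            # global anomaly: a spike far outside the normal value range
            x += rng.choice([-1.0, 1.0]) * 5.0
            labels.append(1)
        else:
            labels.append(0)
        values.append(x)
    return values, labels

# Lengths mirror the reported split: 20,000 total / 5,000 test.
train_values, train_labels = synthetic_series(20000)
test_values, test_labels = synthetic_series(5000, seed=1)
```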
Hardware Specification | Yes | All experiments are conducted using PyTorch (Paszke et al., 2019) in Python 3.8 and executed on an NVIDIA Tesla-A800 GPU.
Software Dependencies | Yes | All experiments are conducted using PyTorch (Paszke et al., 2019) in Python 3.8 and executed on an NVIDIA Tesla-A800 GPU. We employ the ADAM optimizer during training.
Experiment Setup | Yes | We employ the ADAM optimizer during training. Initially, the batch size is set to 32, with the option to halve it (to a minimum of 8) in case of an Out-Of-Memory (OOM) situation. We do not use the Drop Last operation during testing. For our primary evaluation, the window size is usually set to 96 or 192. Figure 4c and Figure 4d show that our model is stable with respect to the training patch size and testing patch size, respectively, over extensive datasets. For example, our model performs better when the training patch size is 8 for MSL and the testing patch size is 32 for CICIDS. We weight-sum these four optimization objectives: L = RecLoss_time + λ1 · RecLoss_freq + λ2 · ClusteringLoss + λ3 · RegularLoss (16), where λ1, λ2, and λ3 are empirical coefficients. Anomaly Score = time-score + λ_score · freq-score (17)
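The weighted-sum objective (Eq. 16) and the anomaly score (Eq. 17) can be sketched in plain Python. This is a minimal illustration of the arithmetic only; the default coefficient values below are placeholders, not the paper's tuned settings:

```python
def total_loss(rec_loss_time, rec_loss_freq, clustering_loss, regular_loss,
               lambda1=0.1, lambda2=0.1, lambda3=0.1):
    """Eq. 16: L = RecLoss_time + λ1·RecLoss_freq + λ2·ClusteringLoss
    + λ3·RegularLoss. Coefficient defaults are illustrative placeholders."""
    return (rec_loss_time
            + lambda1 * rec_loss_freq
            + lambda2 * clustering_loss
            + lambda3 * regular_loss)

def anomaly_score(time_score, freq_score, lambda_score=0.5):
    """Eq. 17: AnomalyScore = time-score + λ_score · freq-score."""
    return time_score + lambda_score * freq_score
```

With all coefficients set to 1.0, `total_loss(1.0, 2.0, 3.0, 4.0, 1.0, 1.0, 1.0)` reduces to the plain sum of the four terms, which makes the weighting structure easy to sanity-check.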