Multi-Resolution Decomposable Diffusion Model for Non-Stationary Time Series Anomaly Detection
Authors: Guojin Zhong, Pan Wang, Jin Yuan, Zhiyong Li, Long Chen
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted across five real-world datasets demonstrate that our proposed MODEM achieves state-of-the-art performance and can be generalized to other time series tasks. |
| Researcher Affiliation | Academia | ¹Hunan University, ²Hong Kong University of Science and Technology |
| Pseudocode | No | The paper describes the methodology using mathematical derivations and textual explanations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement or link confirming the release of source code for the described methodology. Statements such as 'We release our code...' or direct repository links are absent. |
| Open Datasets | Yes | We evaluate the performance of MODEM on five real-world datasets: SMD (Server Machine Dataset) (Su et al., 2019), PSM (Pooled Server Metrics) (Abdulaal et al., 2021), MSL (Mars Science Laboratory) (Hundman et al., 2018), SWaT (Secure Water Treatment) (Mathur & Tippenhauer, 2016), and SMAP (Soil Moisture Active Passive satellite) (Entekhabi et al., 2010). |
| Dataset Splits | Yes | The initial 5 days consist solely of normal data, while anomalies are intermittently introduced over the last 5 days. ... It consists of 13 weeks of training data and 8 weeks of testing data. ... The training sets for both datasets include unlabeled anomalies. ... For the first 7 days, only normal data were generated. During the last 4 days, 41 anomalies were injected using various attack methods. ... Train # and Test # denote the number of training and testing data, respectively. |
| Hardware Specification | Yes | We train MODEM for 100 epochs on datasets that contain only normal time series on four NVIDIA A6000 GPUs. |
| Software Dependencies | No | The proposed MODEM is implemented using the PyTorch framework and optimized using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 1e-3 and a weight decay rate of 1e-6. While PyTorch is mentioned, a specific version number is not provided, nor are specific versions for other libraries. |
| Experiment Setup | Yes | The proposed MODEM is implemented using the PyTorch framework and optimized using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 1e-3 and a weight decay rate of 1e-6. We train MODEM for 100 epochs... Our method employs a square noise schedule, uses 50 diffusion steps, and operates across 4 resolution scales. ... we set the voting threshold to 10 for all datasets. Further details on the hyperparameters of MODEM can be found in Appendix B.6. Table 5: Detailed hyperparameter settings of MODEM. |
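Since the authors have not released code, the reported hyperparameters above can be collected into a minimal configuration sketch. Everything below reconstructs only the values quoted in the table; the exact shape of the "square" noise schedule is an assumption (read here as beta_t growing quadratically with t over a conventional diffusion range), and MODEM's architecture itself is not reproduced.

```python
# Hedged reconstruction of the reported MODEM training configuration.
# Reported values: Adam, lr 1e-3, weight decay 1e-6, 100 epochs,
# 50 diffusion steps, 4 resolution scales, square noise schedule.
LEARNING_RATE = 1e-3      # Adam learning rate (reported)
WEIGHT_DECAY = 1e-6       # Adam weight decay (reported)
EPOCHS = 100              # training epochs (reported)
DIFFUSION_STEPS = 50      # diffusion steps (reported)
RESOLUTION_SCALES = 4     # multi-resolution scales (reported)

# Endpoints of the beta range are an assumption (typical DDPM values),
# not stated in the paper.
BETA_MIN, BETA_MAX = 1e-4, 2e-2

def square_noise_schedule(steps: int) -> list:
    """One plausible reading of a 'square' schedule: betas grow
    quadratically in t, normalized to [BETA_MIN, BETA_MAX]."""
    return [
        BETA_MIN + (BETA_MAX - BETA_MIN) * (t / steps) ** 2
        for t in range(1, steps + 1)
    ]

betas = square_noise_schedule(DIFFUSION_STEPS)
# In PyTorch, the reported optimizer would then be constructed as
# torch.optim.Adam(model.parameters(), lr=LEARNING_RATE,
#                  weight_decay=WEIGHT_DECAY).
```

This yields a monotonically increasing beta sequence ending at BETA_MAX; any attempt to reproduce the paper's results should verify the schedule against Appendix B.6, which the authors cite for full hyperparameter details.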