Demeaned Sparse: Efficient Anomaly Detection by Residual Estimate
Authors: Yifan Fang, Yifei Fang, Ruizhe Chen, Haote Xu, Xinghao Ding, Yue Huang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results indicate that this module can accurately and efficiently generate effective masks for reconstruction-based anomaly detection tasks, thereby enhancing the performance of anomaly detection methods and validating the effectiveness of the theoretical framework. |
| Researcher Affiliation | Academia | 1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, Xiamen, China. 2School of Informatics, Xiamen University, Xiamen, China. 3School of Economics, Renmin University of China, Beijing, China. 4Institute of Artificial Intelligence, Xiamen University, Xiamen, China. |
| Pseudocode | No | The paper describes the methodology using mathematical formulations and descriptive text, but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks or structured code-like procedures. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code for the methodology described, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | The experiments were performed on two widely used anomaly detection datasets: (1) MVTec-AD dataset (Bergmann et al., 2019a) serves as a benchmark for AD... (2) VisA dataset (Zou et al., 2022) is a large industrial anomaly detection dataset... |
| Dataset Splits | Yes | MVTec-AD Dataset: ...the training set contains 3,629 anomaly-free images, and the testing set contains 1,725 images with both normal and anomaly samples. VisA Dataset: ...the training set contains 8,721 anomaly-free images, and the testing set contains 2,100 images with both normal and anomaly samples. |
| Hardware Specification | Yes | All experiments were conducted on an NVIDIA RTX 3090 GPU. |
| Software Dependencies | No | The framework was implemented in PyTorch (Paszke et al., 2019)... The paper mentions PyTorch but does not provide a specific version number for it or for any other key software libraries. |
| Experiment Setup | Yes | Training runs for 800 epochs with a batch size of 2. The Adam optimizer is used with an initial learning rate of 10^-4; the learning rate decays by a factor of 0.2 at epochs 640 and 720. The regularization coefficient α is set to 10^-6. The mask M is binarized at epoch 400 and fixed after binarization. |
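The reported optimizer schedule (Adam at 10^-4, decayed by 0.2 at epochs 640 and 720 over 800 epochs) can be sketched in PyTorch as below. This is a hypothetical reproduction of the hyperparameters only; the `Linear` model and squared-activation loss are placeholders, not the authors' network or objective.

```python
import torch

# Placeholder for the reconstruction network (the paper's model is not released).
model = torch.nn.Linear(8, 8)

# Hyperparameters as reported: Adam, lr 1e-4, x0.2 decay at epochs 640 and 720.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[640, 720], gamma=0.2)

for epoch in range(800):  # 800 training epochs, batch size 2 in the paper
    x = torch.randn(2, 8)          # dummy batch standing in for real data
    loss = model(x).pow(2).mean()  # placeholder loss, not the paper's objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()               # applies the 0.2 decay at the milestones

# After both milestones the learning rate is 1e-4 * 0.2 * 0.2 = 4e-6.
final_lr = optimizer.param_groups[0]["lr"]
```

`MultiStepLR` with `gamma=0.2` reproduces the stated step decays; the binarize-then-freeze schedule for the mask M at epoch 400 would require the authors' mask module and is omitted here.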