AutoUAD: Hyper-parameter Optimization for Unsupervised Anomaly Detection

Authors: Wei Dai, Jicong Fan

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on 38 datasets show the effectiveness of our methods."
Researcher Affiliation | Academia | Wei Dai, Jicong Fan; School of Data Science, The Chinese University of Hong Kong, Shenzhen
Pseudocode | Yes | "Sequential and vectorized implementations for calculating EAG are shown in Algorithms 1 and 2 of Appendix A." (Algorithm 1: Expected Anomaly Gap; Algorithm 2: Expected Anomaly Gap, Vectorized)
Open Source Code | No | The paper provides no explicit statement of code release and no direct link to source code for the methodology it describes (AutoUAD, RTM, EAG, NPD). It mentions the implementations used for baselines and frameworks, but not its own code.
Open Datasets | Yes | "Extensive experiments on 38 benchmark datasets collected by ADBench (Han et al., 2022) and DAMI (Campos et al., 2016)"
Dataset Splits | Yes | "Similar to (Shenkar & Wolf, 2022), we randomly split 50% of normal samples for training and used the rest with anomalous data for testing. All data are standardized using the training set's mean and standard deviation. The split of each dataset is the same across different UAD methods. We repeat all experiments with 5 different data splits and report the results with mean and standard deviation."
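The split protocol quoted above can be sketched as follows. This is a minimal NumPy illustration of the stated procedure (half the normals for training; remaining normals plus all anomalies for testing; standardization with the training statistics), not the authors' code, which is not released; the function name and label convention (0 = normal, 1 = anomaly) are assumptions.

```python
import numpy as np

def make_uad_split(X, y, seed):
    """Sketch of the paper's split protocol (following Shenkar & Wolf, 2022):
    50% of normal samples (y == 0) form the training set; the remaining
    normals plus all anomalies (y == 1) form the test set. Features are
    standardized with the training set's mean and standard deviation."""
    rng = np.random.default_rng(seed)
    normal_idx = np.flatnonzero(y == 0)
    rng.shuffle(normal_idx)
    half = len(normal_idx) // 2
    train_idx = normal_idx[:half]
    test_idx = np.concatenate([normal_idx[half:], np.flatnonzero(y == 1)])
    mu = X[train_idx].mean(axis=0)
    sigma = X[train_idx].std(axis=0) + 1e-8  # guard against constant features
    X_std = (X - mu) / sigma
    return X_std[train_idx], X_std[test_idx], y[test_idx]

# As in the paper, this would be repeated with 5 different seeds and the
# evaluation metric reported as mean and standard deviation across splits.
```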
Hardware Specification | Yes | "All experiments are implemented by Pytorch (Paszke et al., 2017) on NVIDIA RTX 3090 and AMD Ryzen Threadripper 3990X platform."
Software Dependencies | No | The paper mentions PyTorch, Optuna, and PyOD but does not provide version numbers for these components, which are required for reproducibility.
Experiment Setup | Yes | "We consider the core hyper-parameters of each UAD algorithm, they are listed in Table 5. ... Unless specified we train deep UAD methods with 256 batch size, Adam optimizer, 0.001 learning rate, and 200 epochs. For AE we train 100 epochs. For DPAD we use a larger batch size of 4096."
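The training defaults and per-method exceptions quoted above can be collected into a small configuration sketch. The helper and key names below are illustrative assumptions (the paper's own code is not released); only the numeric values come from the quoted setup.

```python
# Defaults quoted in the paper's experiment setup for deep UAD methods.
BASE_CONFIG = {
    "batch_size": 256,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "epochs": 200,
}

# Method-specific exceptions stated in the paper.
OVERRIDES = {
    "AE": {"epochs": 100},          # AE is trained for 100 epochs
    "DPAD": {"batch_size": 4096},   # DPAD uses a larger batch size
}

def training_config(method):
    """Return the base defaults with any stated per-method override applied.
    (Hypothetical helper; naming is not from the paper.)"""
    cfg = dict(BASE_CONFIG)
    cfg.update(OVERRIDES.get(method, {}))
    return cfg
```

For any method without a stated exception, `training_config` simply returns the shared defaults.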