Weakly Supervised Anomaly Detection via Dual-Tailed Kernel

Authors: Walid Durani, Tobias Nitzl, Claudia Plant, Christian Böhm

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, WSAD-DT achieves state-of-the-art performance on several challenging anomaly detection benchmarks, outperforming leading ensemble-based methods such as XGBOD. ... 8. Experiments. 8.1. Experimental Setup. We compare WSAD-DT with state-of-the-art deep anomaly detection methods on over 20 real-world datasets from the ADBench repository (Han et al., 2022). Each dataset is split into 70% training and 30% testing, preserving the anomaly ratio via stratified sampling.
Researcher Affiliation | Academia | 1 LMU Munich, Munich Center for Machine Learning (MCML), Munich, Germany; 2 LMU Munich, Munich, Germany; 3 Faculty of Computer Science, ds:UniVie, University of Vienna, Vienna, Austria; 4 Faculty of Computer Science, University of Vienna, Vienna, Austria.
Pseudocode | Yes | G. Algorithm details. In Algo. 1 we describe WSAD-DT in detail.
Open Source Code | No | Our code is implemented in PyTorch and builds on top of the DeepOD and PyOD libraries (Zhao et al., 2019; Xu, 2023). Our anonymous code repository: Link (Anonymous).
Open Datasets | Yes | We compare WSAD-DT with state-of-the-art deep anomaly detection methods on over 20 real-world datasets from the ADBench repository (Han et al., 2022).
Dataset Splits | Yes | Each dataset is split into 70% training and 30% testing, preserving the anomaly ratio via stratified sampling.
Hardware Specification | Yes | All experiments were conducted on a workstation equipped with an Intel Core i7-10700K CPU (3.8 GHz) and 32 GB of RAM.
Software Dependencies | No | Our code is implemented in PyTorch and builds on top of the DeepOD and PyOD libraries (Zhao et al., 2019; Xu, 2023). All models are trained for 100 epochs using the Adam optimizer with a learning rate of 1e-3 and a weight decay of 1e-5. We use the standard Adam hyperparameters (β1 = 0.9, β2 = 0.999).
Experiment Setup | Yes | All models are trained for 100 epochs using the Adam optimizer with a learning rate of 1e-3 and a weight decay of 1e-5. We use the standard Adam hyperparameters (β1 = 0.9, β2 = 0.999). Batches of size 64 are used for each training step (Table 7). ... Table 7. Neural network and training setting of WSAD-DT: General training — batch size 64, learning rate 1e-3, epochs 100. Optimizer — Adam, momentum β1 0.9, momentum β2 0.999, weight decay 1e-5.
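The split protocol quoted above ("70% training and 30% testing, preserving the anomaly ratio via stratified sampling") can be made concrete with a small sketch. This is not the authors' code; the `stratified_split` helper and the toy 5%-anomaly data are illustrative assumptions, but the mechanism — splitting each label class independently at the same fraction — is what stratified sampling means here.

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.7, seed=0):
    """Return (train_idx, test_idx), splitting each class at the
    same fraction so the anomaly ratio is preserved in both parts."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    rng = random.Random(seed)
    train_idx, test_idx = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = round(len(idxs) * train_frac)  # 70% of *this class*
        train_idx.extend(idxs[:cut])
        test_idx.extend(idxs[cut:])
    return train_idx, test_idx

# Toy data standing in for one benchmark dataset: 5% anomalies (label 1).
labels = [1] * 50 + [0] * 950
train_idx, test_idx = stratified_split(labels)
ratio = lambda idxs: sum(labels[i] for i in idxs) / len(idxs)
print(len(train_idx), len(test_idx), ratio(train_idx), ratio(test_idx))
# → 700 300 0.05 0.05
```

In practice the same effect is obtained with scikit-learn's `train_test_split(..., train_size=0.7, stratify=y)`.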
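The quoted optimizer settings (Adam, lr 1e-3, weight decay 1e-5, β1 = 0.9, β2 = 0.999) fully determine the standard Adam update. Below is a minimal scalar sketch, not the authors' implementation, showing where each reported hyperparameter enters; weight decay is folded into the gradient as an L2 term, which is how `torch.optim.Adam` treats `weight_decay`.

```python
import math

# Hyperparameters as reported in Table 7.
LR, WD, B1, B2, EPS = 1e-3, 1e-5, 0.9, 0.999, 1e-8

def adam_step(w, grad, m, v, t):
    """One Adam update on a single scalar weight."""
    g = grad + WD * w                  # L2 weight decay term
    m = B1 * m + (1 - B1) * g          # first-moment (mean) estimate
    v = B2 * v + (1 - B2) * g * g      # second-moment (variance) estimate
    m_hat = m / (1 - B1 ** t)          # bias correction for step t
    v_hat = v / (1 - B2 ** t)
    w = w - LR * m_hat / (math.sqrt(v_hat) + EPS)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, grad=0.5, m=m, v=v, t=1)
print(round(w, 6))  # first step moves the weight by ≈ lr: 0.999
```

After bias correction, the very first step has magnitude close to the learning rate regardless of gradient scale, which is why Adam's lr is quoted without a schedule here.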