Adversarially Robust Anomaly Detection through Spurious Negative Pair Mitigation

Authors: Hossein Mirzaei Sadeghlou, Mojtaba Nafez, Jafar Habibi, Mohammad Sabokrou, Mohammad Hossein Rohban

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate our superior performance in both clean and adversarial scenarios, with a 26.1% improvement in robust detection across various challenging benchmark datasets. ... Our experiments span various datasets, including large and real-world datasets such as Autonomous Driving Cordts et al. (2016), ImageNet Deng et al. (2009), MVTec AD Bergmann et al. (2019), and ISIC Codella et al. (2019), demonstrating COBRA's practical applicability. Additionally, we conducted ablation studies to examine the impact of various COBRA components, specifically our pseudo-anomaly generation strategy and the introduced adversarial training method.
Researcher Affiliation | Academia | ¹École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; ²Sharif University of Technology, Iran; ³Okinawa Institute of Science and Technology, Japan; EMAIL, EMAIL
Pseudocode | Yes | Appendix A (Algorithm Block), Algorithm 1: Adversarially Robust Anomaly Detection through Spurious Negative Pair Mitigation
Open Source Code | Yes | The implementation of our work is available at: https://github.com/rohban-lab/COBRA.
Open Datasets | Yes | The high-resolution dataset comprises MVTec AD Bergmann et al. (2019), VisA Zou et al. (2022), Cityscapes Cordts et al. (2016), ImageNet Deng et al. (2009), ISIC2018 Codella et al. (2019), and DAGM Wieler et al. (2007), while the low-resolution dataset includes SVHN Goodfellow et al. (2013), FMNIST Xiao et al. (2017), CIFAR10, CIFAR100, and MNIST. ... MVTec AD is under the CC-BY-NC-SA 4.0 license. ... VisA is under the CC-BY 4.0 license. ... Cityscapes ... Their code is released under the MIT license. ... ISIC2018 is a skin disease dataset, available as task 3 of the ISIC2018 challenge. ... The ISIC dataset is available under the CC-BY-NC license. ... ImageNet-30 Hendrycks et al. (2019a), an anomaly detection benchmark ... This dataset is freely available to researchers for non-commercial use.
Dataset Splits | No | In the one-class setup, considering a dataset D with M classes, experiments were conducted by treating each class in turn as the normal set and the other M−1 classes as the anomaly set. This process was repeated for each class, and performance was averaged across all classes to report the overall detection performance. ... For evaluating anomalies, we leverage the representation learned by F to compute the anomaly score, based on the similarity between test samples and normal training samples in the embedding space.
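The one-class protocol quoted above (each class treated as normal in turn, remaining M−1 classes as anomalies, performance averaged over all M runs) can be sketched as follows. This is a minimal illustration, not the paper's code; `detect_auroc` is a hypothetical stand-in for any detector that returns an AUROC given normal training data and anomalous samples.

```python
def one_class_average(dataset_by_class, detect_auroc):
    """Average one-class detection performance over all M classes.

    dataset_by_class: dict mapping class label -> list of samples.
    detect_auroc: callable (normal_samples, anomaly_samples) -> AUROC.
    """
    classes = sorted(dataset_by_class)
    scores = []
    for normal_cls in classes:
        # Current class plays the role of the normal set ...
        normal = dataset_by_class[normal_cls]
        # ... and the other M-1 classes form the anomaly set.
        anomalies = [x for c in classes if c != normal_cls
                     for x in dataset_by_class[c]]
        scores.append(detect_auroc(normal, anomalies))
    # Report the mean over all M one-class runs.
    return sum(scores) / len(scores)
```

Note that this averaging reports overall detection performance but does not define train/validation/test splits, which is why the row above is marked "No".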
Hardware Specification | Yes | Our experiments were conducted using NVIDIA GeForce RTX 3090 GPUs (24 GB).
Software Dependencies | No | The paper mentions using a ResNet-18, the LARS optimizer, the PGD attack, and specific training hyperparameters, but does not provide version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, Python) that would be needed to replicate the experiments.
Experiment Setup | Yes | COBRA is trained for 100 epochs using the LARS optimizer, with a weight decay of 1e-6 and a momentum of 0.9. To schedule the learning rate, we adopt a linear warmup for the initial 10 epochs, gradually increasing the learning rate to 1.0. Subsequently, we use a cosine decay schedule without restarts. The batch size for COBRA is set to 128. ... For adversarial training, we use 10-step PGD and ϵ = 4/255. ... The threshold λ is set at a default significance level of 0.05. ... Adversarial attacks were considered using ϵ = 4/255 for low-resolution images and ϵ = 2/255 for high-resolution images. ... For the PGD attack, we set the number of steps N to 1000, initializing the attack from 10 different random starting points for each trial to enhance the attack's effectiveness and coverage.
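The evaluation attack quoted above (L∞ PGD with N steps, multiple random restarts, and a fixed ϵ budget) can be sketched as below. This is a hedged NumPy illustration, not the paper's implementation: `loss_fn` and `grad_fn` are hypothetical handles to the anomaly loss and its input gradient, and the step-size heuristic 2.5·ϵ/N is an assumption, not something the paper specifies.

```python
import numpy as np

def pgd_linf(x, loss_fn, grad_fn, eps=4/255, steps=1000, restarts=10,
             step_size=None, rng=None):
    """L-inf PGD with random restarts; returns the strongest adversarial x."""
    rng = rng or np.random.default_rng(0)
    # Step-size heuristic (assumption): cover the ball a few times over.
    step_size = step_size or 2.5 * eps / steps
    best_x, best_loss = x, loss_fn(x)
    for _ in range(restarts):
        # Random start inside the eps-ball, one per restart.
        adv = x + rng.uniform(-eps, eps, size=x.shape)
        for _ in range(steps):
            adv = adv + step_size * np.sign(grad_fn(adv))  # ascend the loss
            adv = np.clip(adv, x - eps, x + eps)           # project to the ball
            adv = np.clip(adv, 0.0, 1.0)                   # keep valid pixels
        if loss_fn(adv) > best_loss:
            best_x, best_loss = adv, loss_fn(adv)
    return best_x
```

With the paper's evaluation settings this would be called with `steps=1000` and `restarts=10`, using ϵ = 4/255 for low-resolution and ϵ = 2/255 for high-resolution images.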