Stop Diverse OOD Attacks: Knowledge Ensemble for Reliable Defense

Authors: Zhenbo Shi, Xiaoman Liu, Yuxuan Zhang, Shuchang Wang, Rui Shu, Zhidong Yu, Wei Yang, Liusheng Huang

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experimental results show that REE outperforms current state-of-the-art methods by a large margin in defending against OOD attacks." |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, University of Science and Technology of China, Hefei, China; (2) Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China; (3) Laboratory for Advanced Computing and Intelligence Engineering, Wuxi, China; (4) Hefei National Laboratory, University of Science and Technology of China, Hefei, China; * EMAIL, EMAIL |
| Pseudocode | No | The paper describes its methods with mathematical formulas and prose, but it contains no clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | "The model implementation is based on the code from LIIF (Chen, Liu, and Wang 2021)." (This refers to code used by the authors, not code released by them for this paper's methodology.) |
| Open Datasets | Yes | "The CIFAR-10 and CIFAR-100 datasets consist of images across 10 and 100 categories respectively, with the training and test sets comprising 50k and 10k images. The ImageNet dataset contains 1.2M training images and 50k test images (224×224), spanning 1000 categories." |
| Dataset Splits | Yes | "The CIFAR-10 and CIFAR-100 datasets consist of images across 10 and 100 categories respectively, with the training and test sets comprising 50k and 10k images. The ImageNet dataset contains 1.2M training images and 50k test images (224×224), spanning 1000 categories." |
| Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, memory, or cloud instance types used to run the experiments. |
| Software Dependencies | No | The paper mentions that "The model implementation is based on the code from LIIF (Chen, Liu, and Wang 2021)", but it gives no version numbers for any software dependencies (programming languages or libraries) used in the authors' own implementation. |
| Experiment Setup | Yes | "We opt for SGD as our optimizer, setting the momentum at 0.9. The weight decay and initial learning rate, adjusted using a piecewise decay scheduler, are set to 0.0005 and 0.1, respectively. Training is conducted over 200 epochs with a batch size of 128. The perturbation magnitude, measured by the Lp norm, is represented as ϵp. On these datasets, we generate training pairs with ϵ∞ = 8/255 and ϵ2 = 0.5, using a step size of 2/255." |
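The reported experiment setup can be collected into a runnable sketch. This is a minimal reconstruction from the quoted text only: the paper does not state the milestone epochs of its piecewise decay scheduler, so the milestones (100, 150) and decay factor below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the training configuration quoted above.
# Values marked "assumed" are NOT given in the paper.

CONFIG = {
    "optimizer": "SGD",
    "momentum": 0.9,
    "weight_decay": 5e-4,       # 0.0005
    "initial_lr": 0.1,
    "epochs": 200,
    "batch_size": 128,
    # Adversarial training-pair budgets (L-inf and L2 norms):
    "eps_inf": 8 / 255,
    "eps_l2": 0.5,
    "attack_step_size": 2 / 255,
}


def piecewise_lr(epoch, initial_lr=0.1, milestones=(100, 150), gamma=0.1):
    """Piecewise (step) decay: multiply the LR by `gamma` at each milestone.

    The paper only says the rate is "adjusted using a piecewise decay
    scheduler"; the milestone epochs and gamma here are assumptions.
    """
    lr = initial_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

For example, under these assumed milestones the learning rate would be 0.1 for epochs 0–99, 0.01 for epochs 100–149, and 0.001 thereafter.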