ADBM: Adversarial Diffusion Bridge Model for Reliable Adversarial Purification

Authors: Xiao Li, Wenxuan Sun, Huanran Chen, Qiongxiu Li, Yingzhe He, Jie Shi, Xiaolin Hu

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that ADBM achieved better robustness than DiffPure under reliable adaptive attacks. In particular, ADBM achieved an average robustness gain of 4.4% over DiffPure on CIFAR-10 (Krizhevsky et al., 2009), while clean accuracy remained comparable.
Researcher Affiliation | Collaboration | 1Department of Computer Science and Technology, Tsinghua University; 2Peking University; 3Beijing Institute of Technology; 4Aalborg University; 5Harbin Institute of Technology, Weihai; 6Huawei Technologies
Pseudocode | No | The paper describes the proposed method, ADBM, and its training and inference processes. However, it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' block with structured steps for a method or procedure.
Open Source Code | Yes | Code is available at https://github.com/LixiaoTHU/ADBM.
Open Datasets | Yes | We conducted comprehensive experiments on popular datasets, including SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009), and Tiny-ImageNet (Le & Yang, 2015), together with a large-scale dataset, ImageNet-100.
Dataset Splits | Yes | Consistent with Nie et al. (2022), we conducted the adaptive attack three times on a subset of 512 randomly sampled images from the test set of CIFAR-10.
Hardware Specification | Yes | All experiments were run using PyTorch 1.12.1 and CUDA 11.3 on 4 NVIDIA 3090 GPUs.
Software Dependencies | Yes | All experiments were run using PyTorch 1.12.1 and CUDA 11.3 on 4 NVIDIA 3090 GPUs.
Experiment Setup | Yes | The adversarial noise was computed in the popular norm-ball setting with ϵa = 8/255. When computing ϵa, we used PGD with three iteration steps and a step size of 8/255... The fine-tuning steps were set to 30K... In each fine-tuning step, the value of T in Eq. (9) was uniformly sampled from 100 to 200. Unless otherwise specified, the forward diffusion steps were set to 100 for SVHN and CIFAR-10 and 150 for Tiny-ImageNet and ImageNet-100, respectively. The reverse sampling steps were set to five, and the reverse process used a DDIM sampler. We used the Adam optimizer (Kingma & Ba, 2015) and incorporated an exponential moving average of model weights, with an average rate of 0.999. The batch size was set to 128 for SVHN and CIFAR-10, 112 for Tiny-ImageNet, and 64 for ImageNet-100 (due to memory constraints).
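The attack configuration quoted above (ℓ∞ ball of radius 8/255, three PGD iterations, step size 8/255) can be sketched as a minimal pure-Python PGD loop. This is an illustrative reconstruction, not the paper's code: `grad_fn` is a hypothetical placeholder for the gradient of whatever loss the attack maximizes, and the toy quadratic loss in the usage example is ours.

```python
def pgd_linf(x, grad_fn, eps=8 / 255, step_size=8 / 255, steps=3):
    """Sketch of l-inf PGD with the hyperparameters from the setup:
    3 iterations, step size 8/255, projection onto the eps-ball around x."""
    x_adv = list(x)
    for _ in range(steps):
        g = grad_fn(x_adv)
        # signed gradient-ascent step
        x_adv = [xi + step_size * ((gi > 0) - (gi < 0)) for xi, gi in zip(x_adv, g)]
        # project back into the eps-ball around the clean input
        x_adv = [min(max(ai, xi - eps), xi + eps) for ai, xi in zip(x_adv, x)]
        # keep pixel values in a valid [0, 1] range
        x_adv = [min(max(ai, 0.0), 1.0) for ai in x_adv]
    return x_adv

# Toy usage: maximize ||x - t||^2, whose gradient is 2(x - t).
x = [0.5, 0.25, 0.75]
t = [0.0, 0.0, 1.0]
x_adv = pgd_linf(x, lambda z: [2 * (zi - ti) for zi, ti in zip(z, t)])
```

With this loss, every coordinate is pushed to the boundary of the ball, so the resulting perturbation has magnitude exactly 8/255 per pixel.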