Efficient Image-to-Image Diffusion Classifier for Adversarial Robustness
Authors: Hefei Mei, Minjing Dong, Chang Xu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct sufficient evaluations of the proposed classifier under various attacks on popular benchmarks. Extensive experiments show that our method achieves better adversarial robustness with fewer computational costs than DM-based and CNN-based methods. We perform extensive experiments to empirically demonstrate the superiority of IDC on various benchmarks. |
| Researcher Affiliation | Academia | Hefei Mei (1), Minjing Dong* (1), Chang Xu (2) — (1) City University of Hong Kong, China; (2) University of Sydney, Australia. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology in detail using mathematical formulations and prose, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about providing source code, nor does it include any links to code repositories. |
| Open Datasets | Yes | We conduct experiments on the CIFAR-10, CIFAR-100 datasets (Krizhevsky, Hinton et al. 2009) and Tiny-ImageNet (Deng et al. 2009). |
| Dataset Splits | No | The paper mentions evaluating "on the entire test dataset" for CIFAR-10, but it does not provide specific details on the training, validation, or test splits for any of the datasets used (CIFAR-10, CIFAR-100, Tiny-ImageNet). |
| Hardware Specification | Yes | We train our classifier using the Adam optimizer with 256 batch size across 4 Tesla V100-32GB GPUs, CUDA V10.2 in PyTorch V1.7.1 (Paszke et al. 2019). |
| Software Dependencies | Yes | We train our classifier using the Adam optimizer with 256 batch size across 4 Tesla V100-32GB GPUs, CUDA V10.2 in PyTorch V1.7.1 (Paszke et al. 2019). |
| Experiment Setup | Yes | The diffusion timesteps are set to Ts = 4, and the learning rate is set to 0.0001. For the CIFAR-10 dataset, we train 400 epochs in total, while we train 600 epochs for the CIFAR-100 dataset. The hyper-parameter α is set to 0.2. The model channels of the U-Net are set to cm = 64, and the number of ResNet blocks is set to nR = 1. For the CIFAR-10 dataset, the upscale list of channels is set to u = [1, 4], while that for CIFAR-100 is set to u = [1, 4, 8]. |
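The hyper-parameters reported in the Experiment Setup row can be collected into a single config sketch. This is an illustrative reconstruction only — the key names (`diffusion_timesteps`, `channel_upscale`, etc.) and the `unet_channels` helper are hypothetical and do not come from the paper's (unreleased) code:

```python
# Hypothetical config sketch of the reported IDC training setup.
# Key names are illustrative; values are taken from the paper's quoted text.
IDC_CONFIGS = {
    "cifar10": {
        "diffusion_timesteps": 4,    # Ts
        "learning_rate": 1e-4,
        "epochs": 400,
        "alpha": 0.2,
        "unet_model_channels": 64,   # cm
        "resnet_blocks": 1,          # nR
        "channel_upscale": [1, 4],   # u
        "batch_size": 256,           # across 4 V100-32GB GPUs
    },
    "cifar100": {
        "diffusion_timesteps": 4,
        "learning_rate": 1e-4,
        "epochs": 600,
        "alpha": 0.2,
        "unet_model_channels": 64,
        "resnet_blocks": 1,
        "channel_upscale": [1, 4, 8],
        "batch_size": 256,
    },
}

def unet_channels(cfg):
    """Per-stage channel widths implied by model_channels * upscale list."""
    return [cfg["unet_model_channels"] * u for u in cfg["channel_upscale"]]
```

Under this reading, the CIFAR-10 U-Net would have stage widths [64, 256] and the CIFAR-100 variant [64, 256, 512], which matches the paper's claim of a deliberately lightweight image-to-image backbone.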