Power of Diversity: Enhancing Data-Free Black-Box Attack with Domain-Augmented Learning

Authors: Yang Wei, Jingyu Tan, Guowen Xu, Zhuoran Ma, Zhuo Ma, Bin Xiao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments demonstrate that our method is more effective. In non-targeted attacks on the CIFAR-10 and Tiny-ImageNet datasets, our method outperforms the state-of-the-art by 14% and 7% in attack success rate, respectively.
Researcher Affiliation | Collaboration | Yang Wei 1, Jingyu Tan 1, Guowen Xu 2, Zhuoran Ma 3, Zhuo Ma 3, Bin Xiao 1, 4. 1 School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; 2 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; 3 School of Cyber Engineering, Xidian University, Xi'an, China; 4 Jinan Inspur Data Technology Co., Ltd., Jinan, China
Pseudocode | Yes | Algorithm 1: The proposed data-free black-box attack.
Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | Datasets and Model Architectures. We test our method on public datasets and classic models. 1) MNIST (LeCun et al. 1998); 2) CIFAR-10 (Krizhevsky, Hinton et al. 2009); 3) CIFAR-100 (Krizhevsky, Hinton et al. 2009); 4) Tiny ImageNet (Russakovsky et al. 2015a); ... and ImageNet (Russakovsky et al. 2015b) to train S for black-box attacks.
Dataset Splits | Yes | Datasets and Model Architectures. We test our method on public datasets and classic models. 1) MNIST (LeCun et al. 1998): ... 2) CIFAR-10 (Krizhevsky, Hinton et al. 2009): ... 3) CIFAR-100 (Krizhevsky, Hinton et al. 2009): ... 4) Tiny ImageNet (Russakovsky et al. 2015a):
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | Implementation details. Our S and G are trained using Adam with a learning rate of 0.0001. We set the minibatch size to 500, and train S for 120 epochs on MNIST, 300 epochs on the CIFAR-10/100 datasets, and 400 epochs on the Tiny ImageNet dataset. In Adaptive Semantic Embedding (ASE), we evenly divide the epochs into three stages based on the dataset. During each stage, the weight factor δ gradually grows from 0 to 1. The hyper-parameter β in Heterogeneity Excitation (HE) is set to 0.5, while the hyper-parameters λ1, λ2, and λ3 are all set to 1.
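The quoted δ schedule for Adaptive Semantic Embedding can be sketched in code. This is a hypothetical reconstruction, not the authors' implementation: the paper states only that training epochs are split evenly into three stages and that δ grows from 0 to 1 within each stage; the function name `ase_delta`, the 0-indexed epoch convention, and the linear ramp are assumptions.

```python
def ase_delta(epoch: int, total_epochs: int, num_stages: int = 3) -> float:
    """Hypothetical ASE weight schedule: epochs are split evenly into
    `num_stages` stages, and within each stage the weight factor delta
    ramps linearly from 0 (stage start) to 1 (stage end).

    `epoch` is 0-indexed. The linear ramp is an assumption; the paper
    says only that delta "gradually grows from 0 to 1" per stage.
    """
    stage_len = total_epochs // num_stages
    pos = epoch % stage_len  # position within the current stage
    return pos / (stage_len - 1) if stage_len > 1 else 1.0

# Example with the CIFAR-10 setting from the quote (300 epochs,
# i.e. three stages of 100 epochs each):
print(ase_delta(0, 300))    # start of stage 1 -> 0.0
print(ase_delta(99, 300))   # end of stage 1   -> 1.0
print(ase_delta(100, 300))  # delta resets at the start of stage 2 -> 0.0
```

A cosine or step ramp would also satisfy the quoted description; the linear form above is just the simplest schedule that grows from 0 to 1 within each stage.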