Contrasting Adversarial Perturbations: The Space of Harmless Perturbations

Authors: Lu Chen, Shaofeng Li, Benhao Huang, Fan Yang, Zheng Li, Jie Li, Yuan Luo

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on various DNNs verify Corollaries 1 and 2. In Figure 2, the dimension of the harmless perturbation... We trained various networks on the CIFAR-10 dataset and tested the effect of varying magnitudes of (a) harmless perturbations and (b) the least harmful perturbations. Furthermore, we evaluated the root mean squared error (RMSE) between the network outputs of the perturbed images, ŷ_x, and the network outputs of natural images, y_x, on ResNet-50, i.e., RMSE = E_x[(1/√n)‖ŷ_x − y_x‖]. Table 1 further demonstrates that, compared to adversarial perturbations and Gaussian noise, harmless perturbations left the network output unchanged up to negligible numerical error, while the least harmful perturbation had only a weak impact on the network output as the perturbation magnitude increased.
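The verification described above can be sketched in a few lines. This is a minimal illustration, not the paper's Algorithm 1: it assumes (as the paper's corollaries suggest) that a harmless perturbation is one lying in the null space of the first layer's linear map W, so that W(x + δ) = Wx and every downstream activation, and hence the output, is unchanged. The layer sizes and random seed here are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 256                 # wide linear layer: null space has dimension n - m
W = rng.standard_normal((m, n))

# Orthonormal basis of the null space of W via SVD:
# the last n - m right singular vectors satisfy W v = 0.
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[m:]            # shape (n - m, n)

x = rng.standard_normal(n)             # a "natural" input
coeffs = rng.standard_normal(n - m)
delta = coeffs @ null_basis            # arbitrary null-space perturbation

y_clean = W @ x
y_pert = W @ (x + delta)

# RMSE between perturbed and clean outputs, mirroring the paper's metric:
rmse = np.sqrt(np.mean((y_pert - y_clean) ** 2))
print(np.linalg.norm(delta), rmse)     # large-norm delta, near-zero RMSE
```

Despite δ having a substantial norm, the RMSE is at floating-point noise level, which is the behavior Table 1 reports for harmless perturbations (in contrast to adversarial perturbations and Gaussian noise of the same magnitude).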
Researcher Affiliation Academia (1) Shanghai Jiao Tong University, China; (2) Shanghai Jiao Tong University (Wuxi) Blockchain Advanced Research Center, China; (3) Southeast University, China. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode Yes Algorithm 1 in the Appendix also gives the pseudo-code for generating harmless perturbations.
Open Source Code Yes Code https://github.com/csluchen/harmless-perturbations
Open Datasets Yes We verified the dimension of the harmless subspace for convolutional layers using various DNNs, including ResNet-18/50 (He et al. 2016), VGG-16 (Simonyan and Zisserman 2014) and EfficientNet (Tan and Le 2019), on the CIFAR-10 dataset (Krizhevsky, Hinton et al. 2009). Furthermore, we verified the dimension of the harmless perturbation subspace for fully-connected layers using the MLP-5 on various datasets, including the MNIST dataset (LeCun and Cortes 2010), the CIFAR-10/100 datasets (Krizhevsky, Hinton et al. 2009) and the SVHN dataset (Netzer et al. 2011).
Dataset Splits No The paper mentions using specific datasets (CIFAR-10, MNIST, CIFAR-100, SVHN) but does not provide explicit details on how these datasets were split into training, validation, and test sets, nor does it explicitly state the use of standard splits.
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies No The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup No The paper mentions training models and making architectural modifications like setting strides, but does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings used in the experiments.