SAIF: Sparse Adversarial and Imperceptible Attack Framework

Authors: Tooba Imtiaz, Morgan R. Kohler, Jared F. Miller, Zifeng Wang, Masih Eskandar, Mario Sznaier, Octavia Camps, Jennifer Dy

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show that SAIF computes highly imperceptible and interpretable adversarial examples, and largely outperforms state-of-the-art sparse attack methods on ImageNet and CIFAR-10.
Researcher Affiliation | Academia | Tooba Imtiaz, Morgan R. Kohler, Jared F. Miller, Zifeng Wang, Masih Eskandar, Mario Sznaier, Octavia Camps, and Jennifer Dy; Department of Electrical & Computer Engineering, Northeastern University, Boston, MA. Reviewed on OpenReview: https://openreview.net/forum?id=YZL29eJ5j1. Work done while some authors were at Northeastern University.
Pseudocode | Yes | Algorithm 1: SAIF adversarial attack using Frank-Wolfe for joint optimization.
Open Source Code | Yes | Implementation of SAIF is available at https://github.com/toobaimt/SAIF.
Open Datasets | Yes | We use the ImageNet classification dataset (ILSVRC2012) (Krizhevsky et al., 2012) in our experiments, which has 299 × 299 RGB images belonging to 1,000 classes. We also report results on CIFAR-10 in the appendix. [...] We test SAIF and the existing sparse attack algorithms on the CIFAR-10 dataset (Krizhevsky et al., 2009).
Dataset Splits | Yes | We evaluate all attacks on 5,000 samples chosen from the validation set. For classification, we test on two deep convolutional neural network architectures, namely Inception-v3 (top-1 accuracy: 77.9%) and ResNet-50 (top-1 accuracy: 74.9%). [...] We evaluate all algorithms on 10,000 samples from the test set.
Hardware Specification | Yes | The experiments are run on a single Tesla V100 SXM2 GPU, for an empirically chosen number of iterations T for each dataset.
Software Dependencies | No | We implement the experiments in Julia and use the FrankWolfe variants library (Besançon et al., 2021). We code the classifier and gradient computation backend in Python using the TensorFlow and Keras deep learning frameworks.
Experiment Setup | Yes | SAIF typically converges in 20 iterations; however, we relax the maximum iterations to T = 100 in our experiments.
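To make the Frank-Wolfe mechanics referenced in Algorithm 1 concrete, here is a minimal sketch of projection-free Frank-Wolfe ascent over an L1 ball, which naturally yields sparse perturbations. This is an illustrative simplification, not the authors' implementation: SAIF jointly optimizes a perturbation magnitude and a sparsity mask, whereas this sketch optimizes a single perturbation tensor, and `grad_fn` is a hypothetical stand-in for the paper's TensorFlow/Keras gradient backend.

```python
import numpy as np

def frank_wolfe_l1_attack(grad_fn, x, eps, T=100):
    """Sketch of Frank-Wolfe ascent of an attack loss subject to
    ||delta||_1 <= eps. The constraint set's linear maximization
    oracle places all mass on one coordinate, so iterates stay
    sparse. (SAIF's actual algorithm jointly optimizes magnitude
    and a sparsity mask; this shows only the FW step structure.)"""
    delta = np.zeros_like(x)
    for t in range(T):
        g = grad_fn(x + delta)  # gradient of the attack loss w.r.t. input
        # LMO over the L1 ball: extreme point on the coordinate
        # with the largest absolute gradient.
        i = np.unravel_index(np.argmax(np.abs(g)), g.shape)
        v = np.zeros_like(x)
        v[i] = eps * np.sign(g[i])
        gamma = 2.0 / (t + 2)  # standard Frank-Wolfe step size
        delta = (1 - gamma) * delta + gamma * v
    return delta
```

Because each iterate is a convex combination of at most T extreme points, the perturbation after a small number of iterations touches only a few pixels, which is the sparsity property the table's "converges in 20 iterations" remark refers to.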