GSE: Group-wise Sparse and Explainable Adversarial Attacks

Authors: Shpresim Sadiku, Moritz Wagner, Sebastian Pokutta

ICLR 2025

Reproducibility assessment. Each entry below gives the reproducibility variable, the assessed result, and the supporting excerpt (LLM response).
Research Type: Experimental
"Rigorous evaluations on CIFAR-10 and ImageNet datasets demonstrate a remarkable increase in group-wise sparsity, e.g., 50.9% on CIFAR-10 and 38.4% on ImageNet (average case, targeted attack). This performance improvement is accompanied by significantly faster computation times, improved explainability, and a 100% attack success rate."
Researcher Affiliation: Academia
"Shpresim Sadiku (1,2), Moritz Wagner (1,2) & Sebastian Pokutta (1,2); 1: Department for AI in Society, Science, and Technology, Zuse Institute Berlin, Germany; 2: Institute of Mathematics, Technische Universität Berlin, Germany"
Pseudocode: Yes
"Algorithm 1: Forward-Backward Splitting Attack"
Open Source Code: Yes
"All tests are conducted on a machine with an NVIDIA A40 GPU, and our codes, 10k image indices from the ImageNet validation dataset, and target labels for targeted ImageNet tests are available at https://github.com/wagnermoritz/GSE."
Open Datasets: Yes
"We experiment on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) datasets, analyzing DNNs on 10k randomly selected images from both validation sets."
Dataset Splits: Yes
"We experiment on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) datasets, analyzing DNNs on 10k randomly selected images from both validation sets. For the classifier C on CIFAR-10, we train a ResNet20 model (He et al., 2016) for 600 epochs using stochastic gradient descent, with an initial learning rate of 0.01, reduced by a factor of 10 after 100, 250, and 500 epochs. We set the weight decay to 10^-4, momentum to 0.9, and batch size to 512. For ImageNet, we employ a ResNet50 (He et al., 2016) and a more robust transformer model, ViT_B_16 (Dosovitskiy et al., 2020), both with default weights from the torchvision library."
Hardware Specification: Yes
"All tests are conducted on a machine with an NVIDIA A40 GPU"
Software Dependencies: No
The paper mentions "PyTorch" in Section 3.3 but does not specify a version number.
Experiment Setup: Yes
"For the classifier C on CIFAR-10, we train a ResNet20 model (He et al., 2016) for 600 epochs using stochastic gradient descent, with an initial learning rate of 0.01, reduced by a factor of 10 after 100, 250, and 500 epochs. We set the weight decay to 10^-4, momentum to 0.9, and batch size to 512. Specifically, for CIFAR-10, we set q = 0.25, σ = 0.005, µ = 1, and k̂ = 30, while for ImageNet, we use q = 0.9, σ = 0.05, µ = 0.1, and k̂ = 50. We run all the attacks for a total of 200 iterations."
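The quoted CIFAR-10 training schedule (initial learning rate 0.01, divided by 10 after epochs 100, 250, and 500, over 600 epochs) can be sketched as a plain step-decay function. This is an illustrative reconstruction, not the authors' code; the function name and signature are assumptions for the sketch.

```python
def step_decay_lr(epoch, base_lr=0.01, milestones=(100, 250, 500), factor=0.1):
    """Return the learning rate in effect at a given epoch.

    Illustrative sketch of the quoted CIFAR-10 schedule: start at
    base_lr and multiply by `factor` after each milestone epoch.
    Not the authors' code.
    """
    # Count how many milestones have already been passed at this epoch.
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * factor ** passed

# Schedule over the 600 training epochs:
#   epochs   0-99  -> 0.01
#   epochs 100-249 -> 0.001
#   epochs 250-499 -> 0.0001
#   epochs 500-599 -> 0.00001
```

In a PyTorch training loop the same schedule is typically expressed with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 250, 500], gamma=0.1)`.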