Generating Adversarial Examples with Task Oriented Multi-Objective Optimization

Authors: Anh Tuan Bui, Trung Le, He Zhao, Quan Hung Tran, Paul Montague, Dinh Phung

TMLR 2023

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We conduct comprehensive experiments for our Task Oriented MOO on various adversarial example generation schemes. The experimental results firmly demonstrate the merit of our proposed approach. Our code is available at https://github.com/tuananhbui89/TAMOO. (Abstract) ... In this section, we provide extensive experiments across four settings: (i) generating adversarial examples for an ensemble of models (ENS, Sec 4.1), (ii) generating universal perturbations (UNI, Sec 4.3), (iii) generating robust adversarial examples against an Ensemble of Transformations (EoT, Sec 4.4), and (iv) adversarial training for an ensemble of models (AT, Sec 4.2). The details of each setting can be found in Appendix C.
Researcher Affiliation | Collaboration | Anh Bui (Monash University), Trung Le (Monash University), He Zhao (CSIRO's Data61, Australia), Quan Tran (Adobe Research), Paul Montague (Defence Science and Technology Group, Australia), Dinh Phung (Monash University, VinAI Research)
Pseudocode | Yes | Algorithm 1: Pseudocode for Parameterized TA-MOO. Input: multi-objective functions f1:m(δ); δ's solver with L update steps and learning rate ηδ; w's Gradient Descent solver (GD) with K update steps, learning rate ηw, and variable α; the softmax function is denoted by σ; trade-off parameter λ. Output: the optimal solution δ*.
Open Source Code | Yes | Our code is available at https://github.com/tuananhbui89/TAMOO.
Open Datasets | Yes | We evaluate on the full testing set of two benchmark datasets, CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). ... D.8 Attacking the ImageNet dataset.
Dataset Splits | Yes | We evaluate on the full testing set (10k) of two benchmark datasets, CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). More specifically, the two datasets each have 50k training images and 10k testing images, with the same image resolution of 32×32×3. ... We follow the experimental setup in Wang et al. (2021), where the full test set (10k images) is randomly divided into equal-size groups (K images per group). ... We use 5000 images of the validation set to evaluate.
Hardware Specification | Yes | Table 16 shows the average time to generate one adversarial example in each setting. The results are measured on the CIFAR10 dataset with ResNet18 architecture in the Ensemble of Transformations (EoT) and Universal Perturbation (Uni) settings. We use 1 Titan RTX 24GB for the EoT experiment and 4 Tesla V100 16GB each for the other experiments.
Software Dependencies | No | No specific software dependencies with version numbers are explicitly mentioned in the paper's text. While a GitHub repository for an implementation is linked (https://github.com/kuangliu/pytorch-cifar), the paper itself does not list software names with version numbers.
Experiment Setup | Yes | The attack parameters are the same among methods, i.e., number of attack steps 100, attack budget ϵ = 8/255, and step size ηδ = 2/255. In our method, we use K = 10 to update the weights in each step with learning rate ηw = 0.005. Trade-off parameter λ = 100 in all experiments. In MinMax (Wang et al., 2021), we use the same γ = 3 for all settings and use the authors' implementation. ... Specifically, we use the SGD optimizer (momentum 0.9 and weight decay 5×10⁻⁴) and a Cosine Annealing scheduler to adjust the learning rate with an initial value of 0.1, and train a model for 200 epochs as suggested in the implementation above.
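The Algorithm 1 description in the Pseudocode row amounts to a nested loop: K gradient steps on the weight variable α (with w = σ(α)), then one step on the perturbation δ. The sketch below shows only that structure, under loud assumptions: the losses are toy differentiable functions, the α-update simply ascends the weighted loss (the paper's task-oriented objective, weighted by λ, is different and not reproduced here), and all function names are hypothetical.

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax (the σ of Algorithm 1)."""
    e = np.exp(a - a.max())
    return e / e.sum()

def tamoo_sketch(losses, grads, delta, eps,
                 L=10, K=3, lr_delta=2 / 255, lr_alpha=0.05):
    """Structural sketch of Parameterized TA-MOO (names hypothetical).

    losses(delta) -> (m,) array of task losses f_i(δ)
    grads(delta)  -> (m, d) array of per-task gradients ∇f_i(δ)

    NOTE: the inner α-update below is an illustrative stand-in;
    the paper's task-oriented weight objective (with trade-off
    parameter λ) is defined in the paper, not here.
    """
    alpha = np.zeros(losses(delta).shape[0])   # w = σ(α) starts uniform
    for _ in range(L):                         # outer δ updates
        for _ in range(K):                     # inner GD-style updates on α
            f, w = losses(delta), softmax(alpha)
            # ∇_α Σ_i σ(α)_i f_i, via the softmax Jacobian
            alpha = alpha + lr_alpha * w * (f - np.dot(w, f))
        w = softmax(alpha)
        g = (w[:, None] * grads(delta)).sum(axis=0)   # weighted gradient
        # signed step, projected back into the L∞ eps-ball
        delta = np.clip(delta + lr_delta * np.sign(g), -eps, eps)
    return delta, softmax(alpha)
```

The returned weights always form a simplex point (they sum to 1), and the perturbation never leaves the ϵ-ball, which is the invariant the real algorithm also maintains.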
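The Dataset Splits row quotes a universal-perturbation setup in which the full 10k test set is randomly divided into equal-size groups of K images each. A minimal sketch of that partitioning follows; the function name and the choice K = 100 are illustrative assumptions, not values taken from the paper.

```python
import random

def split_into_groups(n_images, group_size, seed=0):
    """Randomly partition image indices 0..n_images-1 into
    equal-size groups (assumes group_size divides n_images),
    as in the Wang et al. (2021)-style universal-perturbation setup."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)   # fixed seed for reproducibility
    return [idx[i:i + group_size] for i in range(0, n_images, group_size)]

# e.g. the 10k CIFAR test images split into groups of K = 100 (illustrative)
groups = split_into_groups(10_000, 100)
```

Each group then receives one shared (universal) perturbation optimized over all K images in it.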
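The quoted attack parameters (100 steps, budget ϵ = 8/255, step size ηδ = 2/255) are a standard L∞ PGD configuration. A minimal single-objective PGD sketch with those values is shown below, assuming a caller-supplied gradient oracle; the multi-task weighting that TA-MOO adds on top is omitted.

```python
import numpy as np

EPS, STEP, N_STEPS = 8 / 255, 2 / 255, 100   # values quoted in the paper

def pgd_linf(x, grad_fn, eps=EPS, step=STEP, n_steps=N_STEPS):
    """Standard L∞ PGD: ascend the attack loss by signed gradient steps,
    projecting the perturbation into the eps-ball and keeping the
    perturbed image in the valid pixel range [0, 1].

    grad_fn(x_adv) -> gradient of the attack loss w.r.t. the input.
    """
    delta = np.zeros_like(x)
    for _ in range(n_steps):
        delta = delta + step * np.sign(grad_fn(x + delta))
        delta = np.clip(delta, -eps, eps)            # eps-ball projection
        delta = np.clip(x + delta, 0.0, 1.0) - x     # valid-pixel projection
    return x + delta
```

With a constant positive gradient the perturbation saturates at the budget: starting from mid-gray pixels, every pixel ends up exactly ϵ above its starting value.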