Adversaries With Incentives: A Strategic Alternative to Adversarial Robustness
Authors: Maayan Ehrenberg, Roy Ganz, Nir Rosenfeld
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a series of experiments that show how even mild knowledge regarding the opponent's incentives can be useful, and that the degree of potential gains depends on how these incentives relate to the structure of the learning task. To demonstrate the effectiveness of our approach, we perform a thorough empirical evaluation using multiple datasets and architectures and in various strategic settings. |
| Researcher Affiliation | Academia | Maayan Ehrenberg, Roy Ganz, Nir Rosenfeld Faculty of Computer Science Technion Israel Institute of Technology {maayan.eh,ganz,nirr}@{campus,campus,cs}.technion.ac.il |
| Pseudocode | No | The paper describes methods and optimization steps, for example, in Section 5.1 (Optimization) and Appendix C.7 (Utilities in [0, 1]), including equations like (9) and (12) and step-by-step procedures. However, these descriptions are embedded in the text and not presented in clearly labeled pseudocode or algorithm blocks with distinct formatting. |
| Open Source Code | Yes | Code is available at https://github.com/maayango285/Adversaries-With-Incentives. |
| Open Datasets | Yes | We experiment with two datasets: CIFAR-10 (Krizhevsky et al., 2014) and GTSRB (Houben et al., 2013). (footnote 3: https://www.cs.toronto.edu/~kriz/cifar.html) (footnote 4: https://benchmark.ini.rub.de) |
| Dataset Splits | Yes | For both datasets (CIFAR-10 and GTSRB) we use the standard split. For CIFAR-10: 50,000 train and 10,000 test examples. For GTSRB: 39,209 train and 12,630 test images. |
| Hardware Specification | Yes | We ran all the experiments on a cluster of NVIDIA RTX A4000 16GB GPU machines, where each run used between 1-2 GPUs in parallel. |
| Software Dependencies | No | The paper mentions using VGG, ResNet18, and ViT architectures, and PGD attack for implementation. However, it does not specify software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Optimization for all models was done using an SGD optimizer with momentum 0.9 and a base learning rate of 0.01. The learning rate was adjusted using a multi-step scheduler... We used 50 epochs of training throughout, except for ViT on CIFAR-10, which required an additional 50 epochs... For batch size (bs), we used mostly 64 samples per batch. For the adversarial and strategic settings, we adopt the following standard adversarial configuration. We set the threat model as the L∞ norm-ball with an adversarial budget of 8/255. For training, we use a PGD attack with 7 steps and a step size of 0.011. For evaluation, we use 20 steps with a step size of 0.0039. |
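
The PGD configuration quoted above (L∞ ball, budget 8/255, 7 training steps of size 0.011) can be sketched as a minimal NumPy routine. This is an illustrative sketch, not the authors' implementation: `grad_fn`, a hypothetical oracle returning the loss gradient with respect to the input, stands in for a backward pass through the model.

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8/255, step_size=0.011, steps=7):
    """Sketch of a PGD attack on an L-infinity norm-ball.

    x         -- clean input, a NumPy array with values in [0, 1]
    grad_fn   -- hypothetical oracle: gradient of the loss w.r.t. the input
    eps       -- adversarial budget (8/255, as in the quoted setup)
    step_size -- per-step perturbation size (0.011 for training;
                 the paper uses 0.0039 with 20 steps for evaluation)
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = x_adv + step_size * np.sign(grad_fn(x_adv))  # signed ascent step
        g = np.clip(g, x - eps, x + eps)                 # project onto the L-inf ball
        x_adv = np.clip(g, 0.0, 1.0)                     # keep valid pixel range
    return x_adv
```

With 7 steps of size 0.011 the cumulative step (0.077) exceeds the budget (≈0.031), so the projection onto the ε-ball is what actually binds, which matches the standard practice of over-stepping and projecting.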