PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks

Authors: Chen Feng, Ziquan Liu, Zhuo Zhi, Ilija Bogunovic, Carsten Gerner-Beuerle, Miguel Rodrigues

AAAI 2025

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
  "In this section, we conduct extensive experiments with PROSAC to certify the performance of various state-of-the-art vision models in the presence of various adversarial attacks; how the framework recovers existing trends relating to the robustness of different models against different adversarial attacks; and how the framework also suggests new trends relating to state-of-the-art model robustness against attacks."
Researcher Affiliation: Academia
  "1 Department of Electronic and Electrical Engineering, University College London; 2 School of Electronic Engineering and Computer Science, Queen Mary University of London; 3 Faculty of Laws, University College London; 4 AI Centre, Department of Electronic and Electrical Engineering, University College London. EMAIL, EMAIL, EMAIL"
Pseudocode: Yes
  "Algorithm 1: GP-UCB for hyperparameter optimization"
Open Source Code: No
  No explicit statement of, or link to, the source code for PROSAC or its implementation is provided in the paper. The paper only refers to a third-party tool, advertorch, for implementing the attackers.
Open Datasets: Yes
  "Datasets: We will consider primarily classification tasks on the ImageNet-1k dataset (Deng et al. 2009)."
Dataset Splits: Yes
  "We follow the common experimental setting in black-box adversarial attacks, using 1,000 images from ImageNet-1k (Andriushchenko et al. 2020; Ilyas et al. 2018) to apply our proposed certification procedure. In particular, we take our calibration set to correspond to this dataset."
Hardware Specification: No
  The paper does not provide specific hardware details (such as GPU/CPU models, processor types, or memory amounts) used for running the experiments.
Software Dependencies: No
  The paper mentions using advertorch for attacker implementations but does not specify its version number, nor does it provide version numbers for any other key software dependencies such as PyTorch.
Experiment Setup: Yes
  "We set α = 0.10 and ζ = 0.05 in our safety certification procedure, per Definition 1. The hyperparameters of each attacker were carefully selected to explore a wide range of configurations. Specifically, detailed ranges/values of hyperparameters for each attacker are shown in Appendix A. ... We choose βt to be 0.1 with a hyperparameter search over β = {0.01, 0.1, 1.0}."
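Definition 1 itself is not reproduced in this report. For intuition only, an (α, ζ)-style safety certificate with the quoted settings α = 0.10 and ζ = 0.05 can be sketched as a hypothesis test on 0/1 calibration losses. The exact-binomial-tail construction below is a standard textbook device chosen for illustration; it is not necessarily PROSAC's actual certification procedure, and the function name is hypothetical:

```python
from math import comb

def certify(losses, alpha=0.10, zeta=0.05):
    # losses: 0/1 indicators of adversarial failure on the calibration set.
    # We try to reject H0 ("true failure rate > alpha") at level zeta,
    # using the exact binomial tail as the p-value. This is a standard
    # construction used here for illustration, not the paper's own test.
    n, k = len(losses), sum(losses)
    # P(Binomial(n, alpha) <= k): probability of seeing at most k failures
    # if the true failure rate were exactly alpha (worst case under H0).
    p_value = sum(comb(n, i) * alpha**i * (1 - alpha)**(n - i)
                  for i in range(k + 1))
    return p_value <= zeta, p_value
```

Under this sketch, with a 1,000-image calibration set as in the paper, observing 80 adversarial failures would yield a certificate (the binomial p-value falls below ζ = 0.05), while observing 100 failures, right at the nominal 10% rate, would not.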
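The GP-UCB routine named in Algorithm 1 is also not spelled out in this report. As a rough illustration of how GP-UCB can drive a hyperparameter search, here is a minimal sketch over a finite candidate grid; the RBF kernel, the grid, and the noise term are illustrative assumptions rather than the paper's settings, with only the default exploration weight β = 0.1 matching the value quoted in the experiment setup:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    # Squared-exponential (RBF) kernel matrix between two point sets.
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_ucb(objective, candidates, n_rounds=20, beta=0.1, noise=1e-4, seed=0):
    # GP-UCB over a finite candidate grid (shape (n, d)): each round, fit a
    # zero-mean GP posterior to the observations so far and query the
    # candidate maximising mean + sqrt(beta) * std.
    rng = np.random.default_rng(seed)
    X = [candidates[rng.integers(len(candidates))]]  # random first query
    y = [objective(X[0])]
    for _ in range(n_rounds - 1):
        Xa, ya = np.array(X), np.array(y)
        K_inv = np.linalg.inv(rbf_kernel(Xa, Xa) + noise * np.eye(len(Xa)))
        K_s = rbf_kernel(candidates, Xa)
        mu = K_s @ K_inv @ ya                       # posterior mean
        var = np.maximum(1.0 - np.sum((K_s @ K_inv) * K_s, axis=1), 0.0)
        ucb = mu + np.sqrt(beta) * np.sqrt(var)     # upper confidence bound
        x_next = candidates[int(np.argmax(ucb))]
        X.append(x_next)
        y.append(objective(x_next))
    best = int(np.argmax(y))
    return X[best], y[best]
```

For example, maximising a toy objective such as lambda x: -float((x[0] - 0.5) ** 2) over np.linspace(0.0, 1.0, 50)[:, None] drives the queries toward the optimum at 0.5 within a few rounds.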