ANCER: Anisotropic Certification via Sample-wise Volume Maximization
Authors: Francisco Eiras, Motasem Alfarra, Philip Torr, M. Pawan Kumar, Puneet K. Dokania, Bernard Ghanem, Adel Bibi
TMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results demonstrate that ANCER achieves state-of-the-art ℓ1 and ℓ2 certified accuracy on CIFAR-10 and ImageNet in the data-dependent setting, while certifying larger regions in terms of volume, highlighting the benefits of moving away from isotropic analysis. |
| Researcher Affiliation | Collaboration | Francisco Eiras, University of Oxford, Five AI Ltd., UK; Motasem Alfarra, King Abdullah University of Science and Technology (KAUST) |
| Pseudocode | Yes | Algorithm 1: ANCER Optimization |
| Open Source Code | Yes | Our code is available in this repository. (Abstract) and Source code to reproduce the ANCER optimization and certification results of this paper is available as supplementary material. (Appendix E) |
| Open Datasets | Yes | Our empirical results demonstrate that ANCER achieves state-of-the-art ℓ1 and ℓ2 certified accuracy on CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) in the data-dependent setting... (Abstract) and The experiments reported in the paper used the CIFAR-10 Krizhevsky (2009) and ImageNet Deng et al. (2009) datasets... CIFAR-10: Available here (url), under an MIT license. ImageNet: Available here (url), terms of access detailed in the Download page. (Appendix E) |
| Dataset Splits | Yes | Experiments used the typical data split for these datasets found in the PyTorch implementation Paszke et al. (2019). The performance of each method per σ is presented in Appendix G. (Section 7) and certifying the entire CIFAR-10 test set and a subset of 500 examples from the ImageNet test set. (Section 7) |
| Hardware Specification | Yes | As such, we report the average certification runtime for a test set sample on an NVIDIA Quadro RTX 6000 GPU for Fixed σ, Isotropic DD and ANCER (including the isotropic initialization step) in Table 3. |
| Software Dependencies | No | Experiments used the typical data split for these datasets found in the PyTorch implementation Paszke et al. (2019). (Appendix E). The paper mentions PyTorch but does not specify a version number. |
| Experiment Setup | Yes | We trained the ResNet18 networks for 120 epochs, with a batch size of 256 and stochastic gradient descent with a learning rate of 10^-2, and momentum of 0.9. (Appendix E.1) and Following the procedures described in the original work, we trained the WideResNet40 models with the stability loss used in Yang et al. (2020) for 120 epochs, with a batch size of 128 and stochastic gradient descent with a learning rate of 10^-2, and momentum of 0.9, along with a step learning rate scheduler with a γ of 0.1. (Appendix E.2) |
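The Experiment Setup row quotes a step learning-rate schedule (initial rate 10^-2, decay factor γ = 0.1) but does not state the scheduler's step interval. A minimal pure-Python sketch of PyTorch-style `StepLR` semantics, with a hypothetical `step_size` of 30 epochs (an assumption, not from the paper):

```python
def step_lr(epoch, base_lr=1e-2, gamma=0.1, step_size=30):
    """Learning rate at a given epoch under a step decay schedule.

    base_lr and gamma follow Appendix E.2 of the paper; step_size=30
    is a hypothetical value for illustration only.
    """
    return base_lr * gamma ** (epoch // step_size)


if __name__ == "__main__":
    # Rate stays at 1e-2 for the first interval, then decays by gamma
    # at each step boundary over the 120-epoch training run.
    for epoch in (0, 30, 60, 119):
        print(epoch, step_lr(epoch))
```

With these assumed values the rate would decay three times over the quoted 120-epoch run; the actual interval used by the authors may differ.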