Adversarially Robust Spiking Neural Networks Through Conversion
Authors: Ozan Ozdenizci, Robert Legenstein
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experimental evaluations in a novel setting proposed to rigorously assess the robustness of SNNs, where numerous adaptive adversarial attacks that account for the spike-based operation dynamics are considered. Results show that our approach yields a scalable state-of-the-art solution for adversarially robust deep SNNs with low latency. |
| Researcher Affiliation | Academia | Ozan Özdenizci ozan.özdenizci@igi.tugraz.at Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria Robert Legenstein EMAIL Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria |
| Pseudocode | Yes | Our complete robust ANN-to-SNN conversion procedure is outlined in Algorithm 1. |
| Open Source Code | Yes | Our implementations can be found at: https://github.com/IGITUGraz/RobustSNNConversion. |
| Open Datasets | Yes | We performed experiments with CIFAR-10, CIFAR-100, SVHN and Tiny ImageNet datasets. CIFAR-10 and CIFAR-100 datasets both consist of 50,000 training and 10,000 test images of resolution 32x32, from 10 and 100 classes respectively (Krizhevsky, 2009). SVHN dataset consists of 73,257 training and 26,032 test samples of resolution 32x32 from 10 classes (Netzer et al., 2011). Tiny ImageNet dataset consists of 100,000 training and 10,000 test images of resolution 64x64 from 200 classes (Le & Yang, 2015). |
| Dataset Splits | Yes | CIFAR-10 and CIFAR-100 datasets both consist of 50,000 training and 10,000 test images of resolution 32x32, from 10 and 100 classes respectively (Krizhevsky, 2009). SVHN dataset consists of 73,257 training and 26,032 test samples of resolution 32x32 from 10 classes (Netzer et al., 2011). Tiny ImageNet dataset consists of 100,000 training and 10,000 test images of resolution 64x64 from 200 classes (Le & Yang, 2015). |
| Hardware Specification | Yes | All models were implemented with the PyTorch 1.13.0 (Paszke et al., 2019) library, and experiments were performed using GPU hardware of types NVIDIA A40, NVIDIA Quadro RTX 8000 and NVIDIA Quadro P6000. |
| Software Dependencies | Yes | All models were implemented with the PyTorch 1.13.0 (Paszke et al., 2019) library, and experiments were performed using GPU hardware of types NVIDIA A40, NVIDIA Quadro RTX 8000 and NVIDIA Quadro P6000. Adversarial attacks were implemented using the Torchattacks (Kim, 2020) library, with the default attack hyper-parameters. |
| Experiment Setup | Yes | During robust optimization with standard AT, TRADES, and MART, adversarial examples are iteratively crafted for each mini-batch (inner maximization step of Eq. (4)), by using 10 PGD steps with random starts under ℓ∞-bounded perturbations, and η = 2.5ϵ/#steps. For standard AT and MART we perform the inner maximization PGD using L_PGD = L_CE(f_θ(x̃), y), whereas for TRADES we use L_PGD = D_KL(f_θ(x̃)‖f_θ(x)) as the inner maximization loss, following the original works (Madry et al., 2018; Zhang et al., 2019a; Wang et al., 2020). We set the trade-off parameter λ_MART = 4 in all experiments, and λ_TRADES = 6 in CIFAR-10 and λ_TRADES = 3 in CIFAR-100 and SVHN experiments. |
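The inner maximization described in the Experiment Setup row can be sketched as follows. This is a minimal, hypothetical NumPy illustration of ℓ∞-bounded PGD with a random start, 10 steps, and step size η = 2.5ϵ/#steps; the tiny linear softmax "model" and function name `pgd_attack` are illustrative assumptions, not the paper's implementation (the authors use PGD via the Torchattacks library on deep SNNs).

```python
# Hypothetical sketch of the PGD inner maximization (Madry et al., 2018):
# 10 ascent steps on the cross-entropy loss, random start, projection onto
# the l_inf eps-ball, step size eta = 2.5 * eps / steps. The linear softmax
# model here is a stand-in for illustration only.
import numpy as np

def pgd_attack(x, y, W, eps=8 / 255, steps=10):
    """Craft an l_inf-bounded adversarial example for a linear softmax model."""
    eta = 2.5 * eps / steps
    rng = np.random.default_rng(0)
    # random start: uniform perturbation inside the eps-ball
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    onehot = np.eye(W.shape[0])[y]
    for _ in range(steps):
        logits = W @ x_adv
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of cross-entropy w.r.t. the input: W^T (softmax - onehot)
        grad = W.T @ (p - onehot)
        x_adv = x_adv + eta * np.sign(grad)       # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

For TRADES the same loop would instead ascend the KL divergence D_KL(f_θ(x̃)‖f_θ(x)) between perturbed and clean predictions, as quoted above.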