A Halfspace-Mass Depth-Based Method for Adversarial Attack Detection

Authors: Marine Picot, Federica Granese, Guillaume Staerman, Marco Romanelli, Francisco Messina, Pablo Piantanida, Pierre Colombo

TMLR 2023

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate HAMPER in the context of supervised adversarial attack detection across four benchmark datasets. Overall, we empirically show that HAMPER consistently outperforms SOTA methods. In particular, the gains are 13.1% (29.0%) in terms of AUROC (resp. FPR at 95%) on SVHN, 14.6% (25.7%) on CIFAR10 and 22.6% (49.0%) on CIFAR100 compared to the best performing method.
Researcher Affiliation Academia Marine Picot (marine.picot@{centralesupelec.fr, mail.mcgill.ca}), Laboratoire des Signaux et Systèmes (L2S), Université Paris-Saclay, CNRS, CentraleSupélec; Department of Electrical and Computer Science, McGill University, QC, Canada. Federica Granese (EMAIL), LIX, Inria, Institut Polytechnique de Paris; Sapienza University of Rome. Guillaume Staerman (EMAIL), Université Paris-Saclay, Inria, CEA. Marco Romanelli (EMAIL), Laboratoire des Signaux et Systèmes (L2S), Université Paris-Saclay, CNRS, CentraleSupélec. Francisco Messina (EMAIL), School of Engineering, Universidad de Buenos Aires; CSC-CONICET, Buenos Aires, Argentina. Pablo Piantanida (EMAIL), International Laboratory on Learning Systems (ILLS), CNRS, CentraleSupélec. Pierre Colombo (EMAIL), MICS, CentraleSupélec.
Pseudocode Yes Algorithm 1 Training algorithm for the approximation of DHM. Algorithm 2 Testing algorithm for the approximation of DHM. Algorithm 3 HAMPER
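Algorithms 1 and 2 cover training and testing of an approximation of the halfspace-mass (HM) depth D_HM. The exact procedure is in the paper; as a hedged illustration only, a minimal Monte Carlo sketch of the standard HM-depth approximation (random projection directions plus random splits, with the stored mass of each side reused at test time; all function names and the `lamb` range-expansion parameter are illustrative, not the authors' code) could look like:

```python
import numpy as np

def train_hm_depth(X, K=1000, lamb=0.5, rng=None):
    """Fit a Monte Carlo approximation of halfspace-mass depth.

    For each of K iterations: draw a random unit direction, project the
    training data onto it, pick a random split point inside the slightly
    expanded projected range, and store the fraction of training points
    falling on the left of the split.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    dirs = rng.standard_normal((K, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = X @ dirs.T                       # (n, K) projected samples
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    mid, half = (lo + hi) / 2, (hi - lo) / 2
    # random split inside the expanded interval around the projected range
    splits = mid + (2 * rng.random(K) - 1) * (1 + lamb) * half
    mass_left = (proj <= splits).mean(axis=0)
    return dirs, splits, mass_left

def hm_depth(x, dirs, splits, mass_left):
    """Depth of x = average stored mass of the side of each split x falls on."""
    p = dirs @ x
    return np.where(p <= splits, mass_left, 1.0 - mass_left).mean()
```

Central points receive high depth (they tend to fall on the heavy side of each split) while outliers receive low depth, which is what makes the score usable for flagging adversarial examples.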
Open Source Code Yes The code is available at https://github.com/MarinePICOT/HAMPER.
Open Datasets Yes We run our experiments on three image datasets: SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky, 2009).
Dataset Splits Yes We run our experiments on three image datasets: SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky, 2009). ... In the attack-aware scenario, for each attack we train a detector on a validation set composed of the first 1000 samples of the testing set and test it on the remaining samples.
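The attack-aware protocol above amounts to a plain index split of the test set. As a hedged illustration (the function and array names are hypothetical, not taken from the released code):

```python
import numpy as np

def attack_aware_split(test_features, n_fit=1000):
    """Split a test-set feature array into a detector-fit part
    (the first n_fit samples, per the paper's protocol) and an
    evaluation part (the remaining samples)."""
    return test_features[:n_fit], test_features[n_fit:]
```

The same split indices would be applied to the adversarial counterparts of those samples so that the detector is fit and evaluated on disjoint data.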
Hardware Specification Yes Table 4: Time and computational constraints to train and test each detection method. Reported times include all required steps for each method.
Method | GPUs | Training Time | Testing Time
NSS | V100-16G | 00m30s | 00m55s
KD-BU | V100-16G | 00m30s | 02m00s
LID | V100-16G | 04m00s | 35m00s
HAMPER_AA | V100-16G | 02m00s | 02m00s
HAMPER_BA | V100-16G | 02m00s | 02m00s
Software Dependencies No The paper mentions "pytorch-classification" in a footnote (1) related to a pre-trained ResNet-110 for CIFAR100, but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup Yes For SVHN and CIFAR10 we use a ResNet-18 trained for 100 epochs, using an SGD optimizer with a learning rate of 0.1, weight decay of 10^-5, and a momentum of 0.9; for CIFAR100 we chose a ResNet-110 pre-trained using an SGD optimizer with a learning rate of 0.1, weight decay of 10^-5, and a momentum of 0.9. Once trained, all classifiers are frozen.
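The optimizer settings quoted above map onto the standard SGD-with-momentum update. As a minimal NumPy sketch of one such step with the stated hyperparameters (this is an illustration of the update rule, not the authors' training code):

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.1, momentum=0.9, weight_decay=1e-5):
    """One SGD update with the paper's hyperparameters (lr 0.1,
    momentum 0.9, weight decay 1e-5), written out explicitly.
    Weight decay is folded into the gradient, as torch.optim.SGD does."""
    g = grad + weight_decay * w          # L2 penalty added to the gradient
    velocity = momentum * velocity + g   # momentum buffer update
    w = w - lr * velocity                # parameter step
    return w, velocity
```

In practice this is what `torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-5)` computes per parameter tensor; after the 100 training epochs the classifier weights are frozen and only the detector is fit.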