On the Diversity of Adversarial Ensemble Learning

Authors: Jun-Qi Guo, Meng-Zhang Qian, Wei Gao, Zhi-Hua Zhou

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We finally conduct experiments to validate the effectiveness of our method.
Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; School of Artificial Intelligence, Nanjing University, Nanjing, China. Correspondence to: Wei Gao <EMAIL>.
Pseudocode | Yes | Algorithm 1: The AdvEOAP method
Open Source Code | Yes | Code is available at https://github.com/GuoJQ42/AdvOAP.
Open Datasets | Yes | We conduct experiments on three datasets: MNIST (70,000 images, 784 dimensions), F-MNIST (70,000 images, 784 dimensions), and CIFAR10 (60,000 images, 3,072 dimensions). These datasets have been well studied in previous works (Strauss et al., 2017; Kariyappa & Qureshi, 2019; Yang et al., 2021; Deng & Mu, 2024). Downloaded from https://paperswithcode.com/dataset.
Dataset Splits | No | The paper reports the total image counts for MNIST, F-MNIST, and CIFAR10, but does not explicitly state the training, validation, or test splits (e.g., percentages or sample counts) used for these datasets.
Hardware Specification | Yes | All experiments are performed on a server with 64 CPU cores (2 Intel Xeon Gold 6430 CPUs) and an NVIDIA GeForce RTX 4090 GPU, running Ubuntu 24.04 with 1 TB of main memory.
Software Dependencies | No | The paper gives the operating system version (Ubuntu 24.04) but does not specify versions of other key software components, such as deep learning frameworks (e.g., PyTorch, TensorFlow) or scientific libraries, which are crucial for reproducibility.
Experiment Setup | Yes | For iGATADP, we train for 150, 150, and 480 epochs on MNIST, F-MNIST, and CIFAR10, respectively, to reach convergence; for the other ensemble methods, we train for 60, 60, and 250 epochs, respectively. ... For adversarial examples during training, we use PGD with 10 steps and step sizes 0.04, 0.01, and 0.008 for MNIST, F-MNIST, and CIFAR10, respectively. We set α = 0.02 and λ = 10 for our method; Table 5 summarizes the parameter settings for the others. ... We use SGD (Robbins & Monro, 1951) with batch size 256 and learning rate 0.01. ... The perturbation size is set to 8/255 and 128/255 for the ℓ∞ and ℓ2 norms, respectively.
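The setup above generates training-time adversarial examples with 10-step PGD under an ℓ∞ perturbation budget of 8/255. A minimal NumPy sketch of such an ℓ∞ PGD attack on a toy linear classifier with logistic loss (the model, loss, and function name are illustrative assumptions, not the paper's code, which attacks neural network ensembles):

```python
import numpy as np

def pgd_attack(x, y, w, eps=8/255, step=0.01, steps=10, rng=None):
    """l_inf PGD: ascend the logistic loss of a linear classifier w.

    x: input vector with entries in [0, 1]; y: label in {-1, +1};
    w: weight vector. Illustrative stand-in for the PGD-10 attack above.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    delta = rng.uniform(-eps, eps, size=x.shape)  # random start in the eps-ball
    for _ in range(steps):
        margin = y * np.dot(w, x + delta)
        # gradient of log(1 + exp(-margin)) with respect to delta
        grad = -y * w / (1.0 + np.exp(margin))
        delta = delta + step * np.sign(grad)      # l_inf steepest-ascent step
        delta = np.clip(delta, -eps, eps)         # project back onto the eps-ball
    return np.clip(x + delta, 0.0, 1.0)           # keep pixels in [0, 1]
```

The sign step and the clip back to [-eps, eps] are what make this an ℓ∞ attack; swapping them for gradient normalization and a projection onto an ℓ2 ball would give the 128/255 ℓ2 variant mentioned above.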