Enhancing the Adversarial Robustness via Manifold Projection

Authors: Zhiting Li, Shibai Yin, Tai-Xiang Jiang, Yexun Hu, Jia-Mian Wu, Guowei Yang, Guisong Liu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments illustrate that our proposed adversarial defense paradigm significantly improves the robustness compared with previous state-of-the-art AT and AD methods." From Section 4, Experimental Evaluations (Experimental Setup): "We evaluate the effectiveness of our proposed adversarial defense paradigm using three benchmark image datasets: CIFAR-10, CIFAR-100 (Alex 2009), and Tiny ImageNet (Le and Yang 2015)."
Researcher Affiliation | Academia | School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics, Chengdu, P.R. China; Kash Institute of Electronics and Information Industry, Kash, P.R. China; Engineering Research Center of Intelligent Finance, Ministry of Education, Chengdu, P.R. China
Pseudocode | No | The paper includes equations and figures illustrating the methodology, but no explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/Tai-Xiang-Jiang/Enhancing-the-Adversarial-Robustness-via-Manifold-Projection/
Open Datasets | Yes | "Experimental Setup: We evaluate the effectiveness of our proposed adversarial defense paradigm using three benchmark image datasets: CIFAR-10, CIFAR-100 (Alex 2009), and Tiny ImageNet (Le and Yang 2015)."
Dataset Splits | Yes | "We conducted a statistical analysis using 1,000 mini-batches from the CIFAR-100 training set." "Experimental Setup: We evaluate the effectiveness of our proposed adversarial defense paradigm using three benchmark image datasets: CIFAR-10, CIFAR-100 (Alex 2009), and Tiny ImageNet (Le and Yang 2015)."
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or processor types used for running its experiments.
Software Dependencies | No | The paper mentions using the Stochastic Gradient Descent (SGD) optimizer but does not specify any software frameworks (e.g., PyTorch, TensorFlow) or their version numbers.
Experiment Setup | Yes | "Implementation Details: The networks are trained using the Stochastic Gradient Descent (SGD) optimizer with an initial learning rate of 0.1, momentum of 0.9, and weight decay of 5×10⁻⁴. Unless otherwise specified, for PGD-AT we train for 110 epochs, reducing the learning rate by a factor of 10 at the 100th and 105th epochs. For TRADES and the other AD methods, we train for 200 epochs, with learning rate reductions at the 100th and 150th epochs. The inner optimization involves 10 iterations with a step size of 2/255, and the total perturbation bound is ϵ = 8/255 under the L∞ constraint. For CIFAR-10, the distillation temperature τ is set to 30 in all distillation methods, with α = 5/6 in RSLAD, and α = 1.0 in KD, ARD, and AdaAD. For CIFAR-100 and Tiny ImageNet, we set τ = 5 in all distillation methods, with α = 0.95 in KD, α = 5/6 in RSLAD, and α = 1.0 in ARD and AdaAD."
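The hyperparameters quoted above (PGD inner loop of 10 steps at 2/255 within an L∞ ball of ϵ = 8/255, plus the PGD-AT learning-rate schedule of 0.1 with ×0.1 drops at epochs 100 and 105) can be illustrated with a minimal sketch. The paper does not name its framework, so this is a framework-free NumPy illustration using a toy loss and numerical gradients; `toy_loss`, `num_grad`, and `lr_at` are hypothetical helpers, not code from the paper.

```python
import numpy as np

EPS = 8 / 255    # total L-infinity perturbation bound
STEP = 2 / 255   # per-iteration step size
ITERS = 10       # inner-optimization iterations

def toy_loss(x):
    # Stand-in for the model's classification loss on input x.
    return float(np.sum(x ** 2))

def num_grad(f, x, h=1e-5):
    # Central-difference numerical gradient (toy substitute for backprop).
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        g.flat[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def pgd_attack(x_clean, loss_fn, eps=EPS, step=STEP, iters=ITERS):
    # Projected gradient descent: signed ascent on the loss, then
    # projection back onto the L-infinity ball around the clean input.
    x_adv = x_clean.copy()
    for _ in range(iters):
        g = num_grad(loss_fn, x_adv)
        x_adv = x_adv + step * np.sign(g)
        x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)  # L-inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)                      # valid pixel range
    return x_adv

def lr_at(epoch):
    # PGD-AT schedule: start at 0.1, divide by 10 at epochs 100 and 105.
    return 0.1 * (0.1 ** sum(epoch >= m for m in (100, 105)))

x = np.full(4, 0.5)
x_adv = pgd_attack(x, toy_loss)
print(np.max(np.abs(x_adv - x)))  # never exceeds eps = 8/255
```

On this toy loss the gradient sign is constant, so the perturbation saturates the 8/255 bound after four of the ten steps and the projection keeps it there; with a real model the sign pattern changes per step, which is why all ten iterations matter.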