Self-Healing Robust Neural Networks via Closed-Loop Control

Authors: Zhuotong Chen, Qianxiao Li, Zheng Zhang

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "On two standard datasets and one challenging dataset, we empirically verify that the proposed closed-loop control implementation of self-healing consistently improves the robustness of pre-trained models against various perturbations. Numerical Experiments. In this section, we test the performance of the proposed self-healing framework. Specifically, we show that a single set of embedding functions can consistently improve the robustness of many pre-trained models. Section 6.1 shows that the proposed method significantly improves the robustness of both standard and robustly trained models on CIFAR-10 against various perturbations. In the same experimental setting, Sections 6.2 and 6.3 evaluate the method on the CIFAR-100 and Tiny-ImageNet datasets, which empirically verifies the effectiveness and generalizability of the self-healing machinery."
Researcher Affiliation | Academia | Zhuotong Chen, Department of Electrical and Computer Engineering, University of California at Santa Barbara, Santa Barbara, CA, USA; Qianxiao Li, Department of Mathematics, National University of Singapore, Singapore 119076; Zheng Zhang, Department of Electrical and Computer Engineering, University of California at Santa Barbara, Santa Barbara, CA, USA
Pseudocode | Yes | Algorithm 1: The Method of Successive Approximation.
Input: input x_0 (possibly perturbed), a trained neural network F(·), embedding functions {E_t(·)}_{t=0}^{T-1}, control regularization c, learning rate lr, max_Itr, Inner_Itr.
Output: output state x_T.
 1: Initialize controls {u_t}_{t=0}^{T-1} with the greedy solution
 2: for i = 0 to max_Itr do
 3:   x_0^i = x_0 + u_0^i                                  // the controlled initial condition
 4:   for t = 0 to T-1 do
 5:     x_{t+1}^i = F_t(x_t^i + u_t^i)                     // controlled forward propagation, Eq. (8)
 6:   end for
 7:   p_T^i = 0                                            // terminal condition of the adjoint state
 8:   for t = T-1 down to 0 do
 9:     for τ = 0 to Inner_Itr do
10:       H(t, x_t^i, p_{t+1}^i, θ_t, u_t^{i,τ}) = p_{t+1}^i · F_t(x_t^i, θ_t, u_t^{i,τ}) − L(x_t^i, u_t^{i,τ}, E_t(x_t^i))   // compute the Hamiltonian
11:       u_t^{i,τ+1} = u_t^{i,τ} + lr · ∇_u H(t, x_t^i, p_{t+1}^i, θ_t, u_t^{i,τ})   // maximize the Hamiltonian w.r.t. the control u_t
12:     end for
13:     p_t^i = p_{t+1}^i · ∇_x F_t(x_t^i, θ_t, u_t^i) − ∇_x L(x_t^i, u_t^i, E_t(x_t^i))   // backward adjoint propagation
14:   end for
15: end for
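Algorithm 1 can be sketched concretely in a few lines of numpy. This is a minimal illustrative toy, not the authors' implementation: the network is a stack of tanh layers, the learned embedding functions E_t are replaced by a fixed rank-2 projection, and the running loss L(x, u, E_t(x)) = ||x + u − E_t(x)||² + c||u||² is our illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 3, 6                              # number of layers and state dimension
W = [rng.normal(scale=0.5, size=(d, d)) for _ in range(T)]
V, _ = np.linalg.qr(rng.normal(size=(d, 2)))
P = V @ V.T                              # rank-2 projector: stand-in for the learned embedding
c, lr, max_itr, inner_itr = 1e-3, 0.1, 3, 10

def F(t, x):                             # layer map F_t(x) = tanh(W_t x)
    return np.tanh(W[t] @ x)

def jac_F(t, x):                         # Jacobian of F_t at x
    return (1.0 - np.tanh(W[t] @ x) ** 2)[:, None] * W[t]

def E(t, x):                             # stand-in embedding: projection onto a subspace
    return P @ x

def grad_u_H(t, x, p, u):
    # H = p . F_t(x + u) - L(x, u), with L = ||x + u - E_t(x)||^2 + c ||u||^2
    r = x + u - E(t, x)
    return jac_F(t, x + u).T @ p - (2.0 * r + 2.0 * c * u)

x0 = rng.normal(size=d)                  # possibly perturbed input
u = [np.zeros(d) for _ in range(T)]      # controls (greedy initialization omitted here)

for i in range(max_itr):                 # outer MSA iterations
    xs = [x0]                            # controlled forward pass (Alg. 1, lines 3-5)
    for t in range(T):
        xs.append(F(t, xs[t] + u[t]))
    p = np.zeros(d)                      # terminal adjoint p_T = 0 (line 7)
    for t in reversed(range(T)):
        for _ in range(inner_itr):       # inner gradient ascent on the Hamiltonian
            u[t] = u[t] + lr * grad_u_H(t, xs[t], p, u[t])
        r = xs[t] + u[t] - E(t, xs[t])   # adjoint step (line 13): p_t = J_F^T p - dL/dx
        p = jac_F(t, xs[t] + u[t]).T @ p - (2.0 * r - 2.0 * P.T @ r)
```

After the loop, the controls u push each intermediate state toward the embedding subspace, which is the "self-healing" correction the paper applies at inference time with learned auto-encoders in place of the fixed projector used here.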
Open Source Code | Yes | A PyTorch implementation is available at https://github.com/zhuotongchen/Self-Healing-Robust-Neural-Networks-via-Closed-Loop-Control.git
Open Datasets | Yes | The experiments use the open CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets (Sections 6.1-6.3), as well as PASCAL VOC: "We consider the PASCAL Visual Object Detection (VOC) dataset and adopt the standard training protocol, using the union of the VOC 2007 and 2012 training sets following (Liu et al., 2016). For testing, we use the VOC 2007 test set with 4,952 test images and 20 classes (Everingham et al., 2010)."
Dataset Splits | Yes | "We consider the PASCAL Visual Object Detection (VOC) dataset and adopt the standard training protocol, using the union of the VOC 2007 and 2012 training sets following (Liu et al., 2016). For testing, we use the VOC 2007 test set with 4,952 test images and 20 classes (Everingham et al., 2010). We evaluate the performance of all models on clean test data (None) and under auto-attack (AA) (Croce and Hein, 2020b), measured in the ℓ∞, ℓ2, and ℓ1 norms."
Hardware Specification | No | No specific hardware details (GPU/CPU models, memory) are provided for the experiments. The paper mentions "computational efficiency" only in the context of image resizing for the VOC dataset, not the experimental hardware itself.
Software Dependencies | No | The paper points to a PyTorch implementation (https://github.com/zhuotongchen/Self-Healing-Robust-Neural-Networks-via-Closed-Loop-Control.git) but does not specify a PyTorch version number, and no other software dependencies are listed with versions.
Experiment Setup | Yes | "PMP hyper-parameter settings. We use 3 outer iterations and 10 inner iterations with control regularization 0.001 in the PMP solver; in the notation of Algorithm 1, max_Itr = 3, Inner_Itr = 10, and c = 0.001. Robustness evaluations. We evaluate all models on clean test data (None) and under auto-attack (AA) (Croce and Hein, 2020b), measured in the ℓ∞, ℓ2, and ℓ1 norms. Auto-attack is an ensemble of two gradient-based auto-PGD attacks (Croce and Hein, 2020b), the fast adaptive boundary attack (Croce and Hein, 2020a), and a black-box square attack (Andriushchenko et al., 2020). For Tiny-ImageNet, the perturbation budgets are ℓ∞: ϵ = 4/255, ℓ2: ϵ = 0.8, ℓ1: ϵ = 10; for CIFAR-10, ℓ∞: ϵ = 8/255, ℓ2: ϵ = 0.5, ℓ1: ϵ = 12."
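The quoted solver and attack settings can be collected into a plain config fragment for reference. The dictionary names below are ours, not identifiers from the paper's code; the numeric values are exactly those quoted above.

```python
# Hypothetical config names; values quoted from the paper's experiment setup.
PMP_SOLVER = {"max_itr": 3, "inner_itr": 10, "c": 1e-3}

ATTACK_BUDGETS = {                       # auto-attack perturbation budgets per dataset/norm
    "CIFAR-10":      {"linf": 8 / 255, "l2": 0.5, "l1": 12.0},
    "Tiny-ImageNet": {"linf": 4 / 255, "l2": 0.8, "l1": 10.0},
}
```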