Can DBNNs Robust to Environmental Noise for Resource-constrained Scenarios?
Authors: Wendong Zheng, Junyang Chen, Husheng Guo, Wenjian Wang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our approach enhances the robustness of DBNN-based models on five classification tasks, with maximum improvements of 4.8% and 5.4% on the CIFAR-100 and Brain Tumor MRI datasets, respectively. ... We conduct comprehensive experiments to evaluate the robustness of various DBNN-based models and our proposed method against environmental noise perturbations on popular image classification benchmarks (i.e., CIFAR-10 and CIFAR-100 datasets). |
| Researcher Affiliation | Academia | ¹The School of Computer and Information Technology, Shanxi University, Taiyuan, China; ²The College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China. |
| Pseudocode | Yes | To understand the process of applying environmental noise in the manuscript, we give the pseudo-code in Algo. 1. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing their code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | To comprehensively evaluate our proposed approach, we construct experiments on a series of popular BNNs on the large-scale bio-electricity (Roeder, 2022) classification, CIFAR-10, CIFAR-100, ImageNet, and Brain Tumor MRI (Nickparvar, 2021) datasets with two common backbones (i.e., ResNet18 and ResNet34); see details of the setup in the Appendix. |
| Dataset Splits | Yes | The dataset is divided into training, validation, and test sets following a conventional 7:2:1 ratio. ... The CIFAR-10 is the most popular image classification dataset, which consists of 50,000 training samples and 10,000 testing samples of size 32×32 divided into 10 image classes. ... The ImageNet dataset provides 1.2 million training samples and 50,000 validation samples, distributed across a total of 1,000 distinct classes to facilitate the exploration of complex research tasks. |
| Hardware Specification | Yes | All experiments are executed on a Linux server (i.e., Ubuntu 18.04.6 LTS) with one RTX 3090 GPU. |
| Software Dependencies | Yes | We train all baselines and our method with the SGD optimizer using common hyper-parameters (i.e., momentum=0.9, weight decay=1e-4) under the PyTorch 1.13 (GPU) library on several classification benchmarks, namely the CIFAR-10, CIFAR-100, ImageNet, Bio-electricity series, and Brain Tumor MRI datasets. |
| Experiment Setup | Yes | For image classification tasks, the batch size is set to 128 and the number of epochs is set to 400. Furthermore, the initial learning rate is 1e-1 on the three image benchmark datasets. In addition, we follow the popular cosine learning-rate decay strategy in the survey (Qin et al., 2023). To understand the process of applying environmental noise in the manuscript, we give the pseudo-code in Algo. 1. |
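
The training hyper-parameters quoted in the table can be collected into a small sketch. The cosine decay formula below, lr(t) = ½·lr₀·(1 + cos(πt/T)), is the standard schedule; the exact variant the paper follows (it cites Qin et al., 2023) is an assumption here:

```python
import math

# Hyper-parameters quoted from the paper's experiment setup.
INIT_LR = 1e-1       # initial learning rate on the image benchmarks
EPOCHS = 400         # training epochs for image classification
BATCH_SIZE = 128     # batch size
MOMENTUM = 0.9       # SGD momentum
WEIGHT_DECAY = 1e-4  # SGD weight decay

def cosine_lr(epoch: int, init_lr: float = INIT_LR, total: int = EPOCHS) -> float:
    """Cosine learning-rate decay from init_lr down to 0 over `total` epochs.

    Standard schedule lr(t) = 0.5 * lr0 * (1 + cos(pi * t / total));
    the paper's exact variant is assumed, not specified.
    """
    return 0.5 * init_lr * (1.0 + math.cos(math.pi * epoch / total))

# The rate starts at 0.1, halves at the midpoint, and decays to 0.
print(cosine_lr(0))    # 0.1
print(cosine_lr(200))  # ~0.05
print(cosine_lr(400))  # ~0.0
```

In PyTorch this corresponds to `torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=1e-4)` paired with `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=400)`.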