DNR-Pruning: Sparsity-Aware Pruning via Dying Neuron Reactivation in Convolutional Neural Networks

Authors: Boyuan Wang, Richard Jiang

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on diverse datasets demonstrate that DNR-Pruning outperforms existing sparsity-aware pruning techniques while achieving competitive results compared to state-of-the-art methods. These findings suggest that dying neurons can serve as an efficient mechanism for network compression and resource optimization in CNNs, opening new avenues for more efficient and high-performance deep learning models.
Researcher Affiliation | Academia | Boyuan Wang, EMAIL, LIRA Centre, University of Lancaster; Richard Jiang, EMAIL, LIRA Centre, University of Lancaster
Pseudocode | Yes | Algorithm 1: DNR-Pruning Algorithm
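The full procedure is given as Algorithm 1 in the paper and in the linked repository. As a rough illustration only (not the authors' implementation), the core notion of a "dying neuron" — a unit whose post-ReLU activation is zero for every sample in a batch — can be detected like this; the function name and the zero-activation criterion are assumptions for the sketch:

```python
import numpy as np

def find_dying_neurons(activations, threshold=0.0):
    """Illustrative sketch: flag neurons whose post-ReLU activation
    is at or below `threshold` for every sample in the batch.

    activations: array of shape (batch, neurons), post-ReLU values.
    Returns a boolean mask of shape (neurons,); True marks a dying neuron.
    """
    return np.all(activations <= threshold, axis=0)

# Toy batch of 3 samples over 3 neurons: neuron 1 never fires.
acts = np.array([[0.5, 0.0, 1.2],
                 [0.0, 0.0, 0.3],
                 [2.1, 0.0, 0.0]])
mask = find_dying_neurons(acts)
# mask -> [False, True, False]
```

How DNR-Pruning then reactivates or prunes such neurons is specified in the paper's Algorithm 1; this sketch only shows the detection criterion.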
Open Source Code | Yes | Codes, data and more results can be found at https://github.com/wangbst/DNR-Pruning/.
Open Datasets | Yes | We trained MLPNet, ResNet-18 and VGG-16 networks on MNIST, CIFAR-10 and Tiny-ImageNet.
Dataset Splits | No | The paper reports experiments on well-known datasets (MNIST, CIFAR-10, Tiny-ImageNet, and ImageNet), but it does not explicitly state the training/validation/test splits, percentages, or split methodology beyond implying standard usage. For example, it never specifies an '80/10/10 split' or 'standard splits from [citation]'.
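Since the paper does not document its splits, any reproduction would have to assume a convention. A common one for these datasets is to use the library-provided train/test partition (e.g., CIFAR-10's 50,000/10,000 images) and, if a validation set is needed, carve it out of the training indices; the 10% validation fraction below is an illustrative assumption, not something the paper states:

```python
import numpy as np

def make_val_split(n_train, val_fraction=0.1, seed=0):
    """Carve a validation subset out of n_train training indices.

    Returns (train_idx, val_idx) as disjoint index arrays; the seed
    fixes the permutation so the split is reproducible.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_train)
    n_val = int(n_train * val_fraction)
    return idx[n_val:], idx[:n_val]

# CIFAR-10 ships with 50,000 training images; a 90/10 carve-out
# yields 45,000 training and 5,000 validation indices.
train_idx, val_idx = make_val_split(50_000)
```

Documenting exactly this kind of seed-fixed split is what the reproducibility criterion above is asking for.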
Hardware Specification | Yes | We used 30 GB of memory, an NVIDIA V100 GPU and an Intel(R) Xeon(R) Platinum 8352Y CPU @ 2.20GHz.
Software Dependencies | No | The paper does not explicitly state specific software dependencies with their version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x).
Experiment Setup | Yes | We trained MLPNet, ResNet-18 and VGG-16 networks on MNIST, CIFAR-10 and Tiny-ImageNet, following the training scheme outlined by Evci et al. (2020). Based on the results in Table 7, our proposed method, DNR-Pruning, achieves the highest Top-1 accuracy among all compared pruning approaches on ResNet-56 with ImageNet, including a recent SOTA method published at WACV 2024. Notably, it reaches this performance within only 90 training epochs...
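For context on what "sparsity-aware pruning" baselines in such comparisons typically do, here is a minimal sketch of global magnitude pruning to a target sparsity level. This is a standard baseline, not DNR-Pruning itself; the function name and the tie-handling are assumptions for the sketch:

```python
import numpy as np

def prune_to_sparsity(weights, sparsity):
    """Illustrative baseline: zero out the smallest-magnitude entries
    so that roughly `sparsity` fraction of the tensor becomes zero
    (global magnitude pruning).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)   # number of entries to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# A 10-weight layer pruned to 50% sparsity keeps the 5 largest weights.
w = np.arange(1.0, 11.0)
pruned = prune_to_sparsity(w, 0.5)
```

DNR-Pruning's contribution, per the abstract, is to drive this kind of compression via dying-neuron reactivation rather than a plain magnitude criterion; the repository linked above contains the authors' actual implementation.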