Random-Set Neural Networks
Authors: Shireen Kudukkil Manchingal, Muhammad Mubashar, Kaizheng Wang, Keivan Shariatmadar, Fabio Cuzzolin
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our approach outperforms state-of-the-art Bayesian and Ensemble methods in terms of accuracy, uncertainty estimation and out-of-distribution (OoD) detection on multiple benchmarks (CIFAR-10 vs SVHN/Intel-Image, MNIST vs FMNIST/KMNIST, ImageNet vs ImageNet-O). RS-NN also scales up effectively to large-scale architectures (e.g. WideResNet-28-10, VGG16, InceptionV3, EfficientNetB2 and ViT-Base-16), exhibits remarkable robustness to adversarial attacks and can provide statistical guarantees in a conformal learning setting. ... we present a large body of experimental results (based on a fair comparison principle in which all competing models are trained from scratch) which demonstrate how RS-NN outperforms both state-of-the-art Bayesian (LB-BNN (Hobbhahn et al., 2022), FSVI (Rudner et al., 2022)) and Ensemble (DE (Lakshminarayanan et al., 2017), ENN (Osband et al., 2024)) methods in terms of: (i) performance (test accuracy, inference time) (Sec. 4.2); (ii) results on various out-of-distribution (OoD) benchmarks (Sec. 4.3), including CIFAR-10 vs. SVHN/Intel-Image, MNIST vs. FMNIST/KMNIST, and ImageNet vs. ImageNet-O; (iii) ability to provide reliable measures of uncertainty quantification (Sec. 4.4) in the form of pignistic entropy and credal set width, verified on OoD benchmarks; (iv) scalability to large-scale architectures (WideResNet-28-10, InceptionV3, EfficientNetB2, ViT-Base-16) and datasets (e.g. ImageNet) (Sec. 4.5). |
| Researcher Affiliation | Academia | 1School of Engineering, Computing and Mathematics, Oxford Brookes University, UK 2M-Group and DistriNet Division, Department of Computer Science, KU Leuven, Belgium 3LMSD Division, Mechanical Engineering, KU Leuven 4Flanders Make@KU Leuven |
| Pseudocode | Yes | C ALGORITHMS This section outlines the algorithms integral to the implementation and evaluation of budgeting (C.1), expected calibration error (ECE) (C.2), and area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) (C.3). C.1 ALGORITHM FOR BUDGETING Algorithm 1 Budgeting Algorithm ... C.2 ALGORITHM FOR ECE Algorithm 2 Expected Calibration Error (ECE) ... C.3 ALGORITHM FOR AUROC, AUPRC Algorithm 3 Algorithm for AUROC, AUPRC |
| Open Source Code | Yes | The code for training RS-NN (train.ipynb), including pre-trained models (for faster evaluation), evaluation script (eval.ipynb), and configuration files are provided as supplementary materials. We also provide a .yml file to set up the environment to run the experiments. |
| Open Datasets | Yes | Our experiments are performed on multi-class image classification datasets, including MNIST (LeCun & Cortes, 2005), CIFAR-10 (Krizhevsky et al., 2009), Intel Image (Bansal, 2019), CIFAR-100 (Krizhevsky, 2012), and ImageNet (Deng et al., 2009). For out-of-distribution (OoD) experiments, we assess several in-distribution (iD) vs OoD datasets: CIFAR-10 vs SVHN (Netzer et al., 2011)/Intel-Image (Bansal, 2019), MNIST vs F-MNIST (Xiao et al., 2017)/K-MNIST (Clanuwat et al., 2018), and ImageNet vs ImageNet-O (Hendrycks et al., 2021). |
| Dataset Splits | Yes | The data is split into 40000:10000:10000 samples for training, testing, and validation respectively for CIFAR-10 and CIFAR-100, 50000:10000:10000 samples for MNIST, 13934:3000:100 for Intel Image, 1172498:50000:108669 for ImageNet. For OoD datasets, we use 10,000 testing samples, except for Intel Image (3,000) and ImageNet-O (2,000). |
| Hardware Specification | Yes | All models, including RS-NN, are trained on ResNet50 (on NVIDIA A100 80GB GPUs) with a learning rate scheduler initialized at 1e-3 with 0.1 decrease at epochs 80, 120, 160 and 180. ... GPU NVIDIA A100 80GB |
| Software Dependencies | No | The paper mentions optimizers like Adam and SGD in Table 5, but does not provide specific version numbers for software libraries such as Python, PyTorch, TensorFlow, or CUDA, which are necessary for full reproducibility. |
| Experiment Setup | Yes | All models, including RS-NN, are trained on ResNet50 (on NVIDIA A100 80GB GPUs) with a learning rate scheduler initialized at 1e-3 with 0.1 decrease at epochs 80, 120, 160 and 180. Standard data augmentation (Krizhevsky et al., 2012), including random horizontal/vertical shifts with a magnitude of 0.1 and horizontal flips, is applied to all models. ... All the models were trained from scratch for 200 epochs (recommended by most), with a batch size of 128. ... We set a budget K of 20 focal sets for CIFAR-10/MNIST/Intel Image, 200 for CIFAR-100 and 3000 for ImageNet. RS-NN is trained from scratch on ground-truth belief encoding of sets using the LRS loss function (Eq. 7) over 200 epochs, with a batch size (bsize) of 128 and α = β = 1e-3 as hyperparameter values. |
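The Expected Calibration Error referenced in the Pseudocode row (Algorithm 2) is a standard metric: predictions are grouped into confidence bins and the bin-size-weighted gap between accuracy and mean confidence is accumulated. Below is a minimal sketch of that standard computation, not the paper's own implementation; the bin count of 15 is an assumption.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard ECE: bin predictions by confidence, then sum the
    bin-size-weighted absolute gap |accuracy - mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi]; the first bin also catches confidence 0
        mask = (confidences > lo) & (confidences <= hi) if lo > 0 else (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()
            conf = confidences[mask].mean()
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

A perfectly calibrated model (e.g. 80% accuracy at confidence 0.8) yields an ECE of zero; a model that is always fully confident but right only half the time yields 0.5.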
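Algorithm 3 in the same appendix covers AUROC for the OoD benchmarks. As a point of reference, AUROC equals the probability that a randomly chosen in-distribution sample receives a higher detection score than a randomly chosen OoD sample; the pairwise sketch below illustrates this definition (it is O(n·m) and illustrative only, not the paper's algorithm).

```python
def auroc(scores_id, scores_ood):
    """AUROC as a Mann-Whitney U statistic: fraction of (iD, OoD) pairs
    where the in-distribution sample scores higher (ties count 0.5)."""
    wins = 0.0
    for p in scores_id:
        for q in scores_ood:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_id) * len(scores_ood))
```

A perfect detector separates the two score distributions completely (AUROC = 1.0), while overlapping distributions pull the value toward 0.5.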
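The training schedule described in the Experiment Setup row (learning rate initialized at 1e-3, multiplied by 0.1 at epochs 80, 120, 160 and 180) can be sketched as a simple step function; the function name and signature here are illustrative, not from the paper's code.

```python
def lr_at_epoch(epoch, base_lr=1e-3, decay_epochs=(80, 120, 160, 180), factor=0.1):
    """Step learning-rate schedule: start at base_lr and multiply by
    `factor` at each milestone epoch that has been reached."""
    lr = base_lr
    for milestone in decay_epochs:
        if epoch >= milestone:
            lr *= factor
    return lr
```

Over the 200 training epochs this yields four decade drops, ending at 1e-7 for the final 20 epochs.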