UncertainSAM: Fast and Efficient Uncertainty Quantification of the Segment Anything Model
Authors: Timo Kaiser, Thomas Norrenbrock, Bodo Rosenhahn
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our proposed deterministic USAM demonstrates superior predictive capabilities on the SA-V, MOSE, ADE20k, DAVIS, and COCO datasets, offering a computationally cheap and easy-to-use UQ alternative that can support user-prompting, enhance semi-supervised pipelines, or balance the tradeoff between accuracy and cost efficiency. |
| Researcher Affiliation | Academia | Institute for Information Processing / L3S, Leibniz University Hannover, Germany. Correspondence to: Timo Kaiser <EMAIL>. |
| Pseudocode | No | The paper describes methods and processes in descriptive text and figures, but it does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Easy-to-use code is available at https://github.com/GreenAutoML4FAS/UncertainSAM |
| Open Datasets | Yes | Our proposed deterministic USAM demonstrates superior predictive capabilities on the SA-V, MOSE, ADE20k, DAVIS, and COCO datasets, offering a computationally cheap and easy-to-use UQ alternative that can support user-prompting, enhance semi-supervised pipelines, or balance the tradeoff between accuracy and cost efficiency. |
| Dataset Splits | Yes | During SMAC3 optimization, we split the training set into 80% training and 20% validation subsets. Our best trained models on the large-scale dataset SA-V are publicly available in our code repository. |
| Hardware Specification | Yes | Table 7. Runtime of SAM with and without UQ methods on a regular image performed on a NVIDIA RTX3050 Ti. |
| Software Dependencies | No | The paper mentions software components like 'SGD optimizer' and 'Bayesian optimization framework SMAC3', but does not provide specific version numbers for these or other libraries/programming languages used. |
| Experiment Setup | Yes | The MLPs of USAM are trained with the SGD optimizer (weight decay 0.001). Hyperparameters are optimized using the Bayesian optimization framework SMAC3 (Lindauer et al., 2022). The number of epochs is limited to between 5 and 80, the batch size between 16 and 256, the learning rate between 0.0001 and 0.1, and SGD's momentum between 0.1 and 0.9. During SMAC3 optimization, we split the training set into 80% training and 20% validation subsets. |
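The reported setup can be sketched in code. The ranges, the fixed weight decay of 0.001, and the 80/20 split come from the paper; everything else here is an assumption. In particular, random sampling stands in for SMAC3's Bayesian optimization, and the log-uniform draw for the learning rate is a guess rather than something the paper states.

```python
import random

def sample_config(rng: random.Random) -> dict:
    """Draw one candidate configuration from the ranges reported in the paper.

    The paper tunes these with SMAC3; plain random sampling is used here
    only to illustrate the search space.
    """
    return {
        "epochs": rng.randint(5, 80),                # 5 to 80 epochs
        "batch_size": rng.randint(16, 256),          # 16 to 256
        "learning_rate": 10 ** rng.uniform(-4, -1),  # 1e-4 to 0.1 (log scale is an assumption)
        "momentum": rng.uniform(0.1, 0.9),           # SGD momentum, 0.1 to 0.9
        "weight_decay": 0.001,                       # fixed, as stated in the paper
    }

def train_val_split(items: list, val_frac: float = 0.2, seed: int = 0):
    """80/20 split of the training set, as used during SMAC3 optimization."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_frac)
    return shuffled[n_val:], shuffled[:n_val]
```

A reproduction attempt would replace `sample_config` with a SMAC3 configuration space over the same ranges and evaluate each candidate on the 20% validation subset produced by `train_val_split`.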