Hierarchical Uncertainty Estimation for Learning-based Registration in Neuroimaging

Authors: Xiaoling Hu, Karthik Gopinath, Peirong Liu, Malte Hoffmann, Koen Van Leemput, Oula Puonti, Juan Iglesias

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on publicly available data sets show that Monte Carlo dropout correlates very poorly with the reference registration error, whereas our uncertainty estimates correlate much better. Crucially, the results also show that uncertainty-aware fitting of transformations improves the registration accuracy of brain MRI scans. Finally, we illustrate how sampling from the posterior distribution of the transformations can be used to propagate uncertainties to downstream neuroimaging tasks. ... Section 4.1 RESULTS ... Section 4.3 REGISTRATION ACCURACY ... Table 1: Registration performance for transformations with and without uncertainties.
Researcher Affiliation | Academia | 1 Massachusetts General Hospital and Harvard Medical School; 2 Aalto University; 3 Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital; 4 Hawkes Institute, University College London; 5 Computer Science and AI Laboratory, Massachusetts Institute of Technology
Pseudocode | No | The paper describes methods and mathematical formulations in prose, but does not include any explicitly labeled pseudocode blocks, algorithms, or structured code-like procedures.
Open Source Code | Yes | Code is available at: https://github.com/HuXiaoling/Regre4Regis.
Open Datasets | Yes | The training data consists of high-resolution, isotropic, T1-weighted scans of 897 subjects from the HCP dataset (Van Essen et al., 2013) and 1148 subjects from the ADNI (Jack Jr et al., 2008), while the test data set includes the ABIDE (Di Martino et al., 2014) and OASIS3 (LaMontagne et al., 2019) data sets.
Dataset Splits | Yes | The total training set consists of the coordinates, segmentations, and brain masks, and is split 80/20% between training and validation. ... We selected the first 100 scans from both data sets for evaluation so that the test data set matches that of Gopinath et al. (2024).
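The 80/20% train/validation split described above can be sketched in a few lines. This is an illustration only: the subject counts (897 HCP + 1148 ADNI) come from the paper, but the shuffling scheme, the seed, and the function name are assumptions, not the authors' code.

```python
import random

def split_subjects(subject_ids, val_fraction=0.2, seed=0):
    """Shuffle subject IDs deterministically and hold out a validation set.

    Hypothetical helper for illustration; the paper only states the 80/20 ratio.
    """
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)        # deterministic shuffle
    n_val = round(len(ids) * val_fraction)  # 20% held out for validation
    return ids[n_val:], ids[:n_val]         # (train, validation)

# 897 HCP + 1148 ADNI = 2045 training subjects, as reported in the paper
train_ids, val_ids = split_subjects(range(897 + 1148))
```

With 2045 subjects this yields 1636 training and 409 validation IDs, matching the stated 80/20 ratio.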
Hardware Specification | Yes | In our experiments, training took approximately 1.2h per epoch on an NVIDIA RTX A6000 GPU.
Software Dependencies | No | We use the standard U-net (Ronneberger et al., 2015) as our backbone. ... using NiftyReg (Modat et al., 2010). ... SynthSeg (Billot et al., 2023). The paper mentions the software tools used but does not provide specific version numbers for these or for libraries such as Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | We empirically set λmask = 0.5, λseg = 5 and λuncer = 0.1. The learning rate is 0.01. The parameters are chosen via validation performance. ... The final activation layer is linear, to regress the atlas coordinates in decimeters (which roughly normalizes them from -1 to 1). ... It has four resolution levels with two convolutional layers (comprising 3x3x3 convolutions and a ReLU) followed by 2x2x2 max pooling (in the encoder) or upconvolution (decoder). ... Training took approximately 1.2h per epoch on an NVIDIA RTX A6000 GPU, with convergence typically requiring 50 epochs.
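The reported loss weights can be combined as in the minimal sketch below. Only the λ values (λmask = 0.5, λseg = 5, λuncer = 0.1) come from the paper; the per-term loss names and the additive form of the total objective are assumptions for illustration.

```python
# Weights reported in the paper; the overall additive form of the
# objective is an assumption, not taken from the authors' code.
LAMBDA_MASK = 0.5
LAMBDA_SEG = 5.0
LAMBDA_UNCER = 0.1

def total_loss(l_coord, l_mask, l_seg, l_uncer):
    """Weighted sum of hypothetical per-task losses using the paper's lambdas."""
    return (l_coord
            + LAMBDA_MASK * l_mask
            + LAMBDA_SEG * l_seg
            + LAMBDA_UNCER * l_uncer)
```

For unit per-term losses this gives 1 + 0.5 + 5 + 0.1 = 6.6, which makes the relative emphasis on the segmentation term easy to see.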