Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Robust Compressed Sensing MRI with Deep Generative Priors

Authors: Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jon Tamir

NeurIPS 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform retrospective under-sampling in all experiments, i.e., given fully-sampled k-space measurements from the NYU fastMRI [56, 94] and Stanford MRI [1] datasets, we apply sampling masks and evaluate the performance of all considered algorithms on the reconstructed data."
Researcher Affiliation | Academia | Ajil Jalal (ECE, UT Austin), Marius Arvinte (ECE, UT Austin), Giannis Daras (CS, UT Austin), Eric Price (CS, UT Austin), Alexandros G. Dimakis (ECE, UT Austin), Jonathan I. Tamir (ECE, UT Austin)
Pseudocode | Yes | "Putting everything together, our final algorithm is: for x_0 ∼ N_c(0, I) and for all t = 0, ..., T−1, x_{t+1} ← x_t + η_t (f(x_t; β_t) + A^H(y − Ax_t)) + √(2η_t) ζ_t, ζ_t ∼ N(0, I). (4)"
Open Source Code | Yes | "Our code and models are available at: https://github.com/utcsilab/csgm-mri-langevin."
Open Datasets | Yes | "We perform retrospective under-sampling in all experiments, i.e., given fully-sampled k-space measurements from the NYU fastMRI [56, 94] and Stanford MRI [1] datasets"
Dataset Splits | No | The paper states, "Specifically, we train using T2-weighted images at a field strength of 3 Tesla for a total of 14,539 2D training slices." and "We train the MoDL and E2E-VarNet baselines from scratch on the same training dataset as our method...", but does not provide explicit percentages or counts for a validation dataset split.
Hardware Specification | Yes | "When benchmarked on an NVIDIA RTX 2080Ti GPU, our method takes 16 minutes and 0.95 GB of memory to reconstruct a high-resolution brain scan"
Software Dependencies | Yes | "We use the publicly available implementation from the BART toolbox [88, 86]"
Experiment Setup | Yes | "We train the MoDL and E2E-VarNet baselines from scratch on the same training dataset as our method, at acceleration factors R = {3, 6} and equispaced under-sampling, with a supervised SSIM loss on the magnitude MVUE image, for 40 and 15 epochs, respectively, using a batch size of 1. For the ConvDecoder baseline... optimize the number of fitting iterations... We find that 10000 iterations are sufficient..."
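The pseudocode quoted above combines a learned score term f(x_t; β_t) with a data-consistency gradient A^H(y − Ax_t) and injected noise √(2η_t) ζ_t. A minimal NumPy sketch of that update is shown below, assuming a single-coil forward operator A implemented as a masked orthonormal FFT and a user-supplied `score_fn` standing in for the trained score network; the function name, signature, and step-size schedule are illustrative, not taken from the released code.

```python
import numpy as np

def langevin_recon(y, mask, score_fn, etas, rng=None):
    """Sketch of the annealed Langevin update in Eq. (4).

    y        : retrospectively under-sampled k-space measurements (complex 2D array)
    mask     : binary sampling mask, same shape as y
    score_fn : callable (x, t) -> score estimate f(x; beta_t)
    etas     : sequence of per-step sizes eta_t
    """
    if rng is None:
        rng = np.random.default_rng(0)
    shape = y.shape

    def cnoise():
        # sample from the circular complex normal N_c(0, I)
        return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

    x = cnoise()  # x_0 ~ N_c(0, I)
    for t, eta in enumerate(etas):
        # A x: orthonormal FFT followed by the sampling mask
        Ax = mask * np.fft.fft2(x, norm="ortho")
        # A^H (y - A x): adjoint maps the masked k-space residual back to image space
        data_grad = np.fft.ifft2(mask * (y - Ax), norm="ortho")
        # x_{t+1} = x_t + eta_t (f(x_t; beta_t) + A^H(y - A x_t)) + sqrt(2 eta_t) zeta_t
        x = x + eta * (score_fn(x, t) + data_grad) + np.sqrt(2 * eta) * cnoise()
    return x
```

For a toy sanity check, `score_fn = lambda x, t: -x` (the score of a standard Gaussian prior) keeps the iterates bounded; the actual method replaces it with an annealed score network trained on MRI slices.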