Active Diffusion Subsampling
Authors: Oisín Nolan, Tristan Stevens, Wessel L. van Nierop, Ruud van Sloun
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method with three sets of experiments, covering a variety of data distributions and application domains. |
| Researcher Affiliation | Academia | Oisín Nolan, Tristan S.W. Stevens, Wessel L. van Nierop, Ruud J.G. van Sloun (emails redacted), Eindhoven University of Technology |
| Pseudocode | Yes | Algorithm 1: Active Diffusion Subsampling |
| Open Source Code | Yes | Code is available at https://active-diffusion-subsampling.github.io/. |
| Open Datasets | Yes | Experimentally, we show that through designing informative subsampling masks, ADS significantly improves reconstruction quality compared to fixed sampling strategies on the MNIST and CelebA datasets, as measured by standard image quality metrics, including PSNR, SSIM, and LPIPS. Furthermore, on the task of Magnetic Resonance Imaging acceleration, we find that ADS performs competitively with existing supervised methods in reconstruction quality while using a more interpretable acquisition scheme design procedure. [...] we trained a diffusion model on the samples from the CAMUS (Leclerc et al., 2019) dataset |
| Dataset Splits | Yes | We use the same data train / validation / test split and data preprocessing as Yin et al. (2021) for comparability. In particular, the data samples are k-space slices cropped and centered at 128 × 128, with 34,732 train samples, 1,785 validation samples, and 1,851 test samples. [...] we trained a diffusion model on the samples from the CAMUS (Leclerc et al., 2019) dataset, with the train set consisting of 7448 frames from cardiac ultrasound scans across 500 patients, resized to 128 × 128 in the polar domain. [...] we benchmark ADS against DPS using random and data-variance fixed-mask strategies on a test set consisting of frames from 50 unseen patients |
| Hardware Specification | Yes | Our model for fast MRI (Table 2) uses 40 ms/step with 76 steps per acquisition, leading to 3040 ms per acquisition on our NVIDIA GeForce RTX 2080 Ti GPU. [...] Each model was trained using one GeForce RTX 2080 Ti (NVIDIA, Santa Clara, CA, USA) with 11 GB of VRAM. |
| Software Dependencies | Yes | The methods and models are implemented in the Keras 3.1 (Chollet et al., 2015) library using the JAX backend (Bradbury et al., 2018). |
| Experiment Setup | Yes | Inference was performed using Diffusion Posterior Sampling for measurement guidance, with guidance weight ζ = 1 and T = 1000 reverse diffusion steps. For ADS, measurements were taken at regular intervals in the window [0, 800], with Np = 16 particles and σ_y = 10. [...] The model was trained for 500 epochs with the following parameters: widths=[32, 64, 128], block_depth=2, diffusion_steps=30, ema=0.999, learning_rate=0.0001, weight_decay=0.0001, loss="mae". |
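The experiment setup row mentions Diffusion Posterior Sampling (DPS) for measurement guidance with guidance weight ζ = 1. A minimal sketch of one DPS guidance correction in JAX (matching the paper's reported backend) is shown below. The denoiser `predict_x0` is a hypothetical stand-in for the paper's trained diffusion model, and the subsampling mask plays the role of the measurement operator; this is an illustration under those assumptions, not the authors' implementation.

```python
import jax
import jax.numpy as jnp

# Hypothetical denoiser: predicts x0 from the noisy sample x_t.
# Any differentiable function works for this sketch; the real
# system would use the trained diffusion model.
def predict_x0(x_t, t):
    return x_t / (1.0 + t)

def dps_guidance(x_t, t, y, mask, zeta=1.0):
    """One DPS measurement-guidance correction: move x_t down the
    gradient of the data-fidelity term ||y - A(x0_hat)||, where the
    measurement operator A is a subsampling mask here."""
    def data_fidelity(x):
        x0_hat = predict_x0(x, t)
        residual = y - mask * x0_hat
        return jnp.linalg.norm(residual)
    grad = jax.grad(data_fidelity)(x_t)
    return x_t - zeta * grad

# Toy usage: the guided sample moves toward consistency with the
# observed (subsampled) measurements.
key = jax.random.PRNGKey(0)
x_t = jax.random.normal(key, (8, 8))
mask = jnp.zeros((8, 8)).at[:, ::2].set(1.0)  # keep every other column
y = mask * jnp.ones((8, 8))                   # measurements of a ground truth
x_guided = dps_guidance(x_t, t=0.5, y=y, mask=mask, zeta=1.0)
```

In full DPS this correction is interleaved with the standard reverse diffusion update at each of the T steps; only the guidance term is sketched here.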