Self-Supervised Diffusion MRI Denoising via Iterative and Stable Refinement

Authors: Chenxu Wu, Qingpeng Kong, Zihang Jiang, S Kevin Zhou

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our thorough experiments on real and simulated data demonstrate that Di-Fusion achieves state-of-the-art performance in microstructure modeling, tractography tracking, and other downstream tasks. ... 4 EXPERIMENTS ... 4.3 QUANTITATIVE AND QUALITATIVE RESULTS ON in-vivo DATA ... 4.4 QUANTITATIVE RESULTS ON SIMULATED DATA ... 4.5 ABLATION STUDIES
Researcher Affiliation | Academia | Chenxu Wu1,2, Qingpeng Kong1,2, Zihang Jiang1,2, & S. Kevin Zhou1,2,3,4. 1School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC; 2MIRACLE Center, Suzhou Institute for Advanced Research, USTC; 3State Key Laboratory of Precision and Intelligent Chemistry, USTC; 4Key Laboratory of Intelligent Information Processing of CAS, Institute of Computing Technology, CAS
Pseudocode | Yes | Algorithm 1 (Training process): Initialize Fθ randomly; input 4D data X ∈ ℝ^(w×h×d×l); repeat ... Algorithm 2 (Sampling process): Load pre-trained Fθ; input: X ∈ ℝ^(w×h×d×l), i, j, and CSNR ...
Open Source Code | Yes | Code is available at https://github.com/FouierL/Di-Fusion.
Open Datasets | Yes | To thoroughly evaluate Di-Fusion, we perform experiments on three publicly available brain dMRI datasets acquired using different, commonly-used acquisition schemes: (i) High-Angular Resolution Diffusion Imaging (Stanford HARDI, X ∈ ℝ^(106×81×76×150) (Rokem, 2016)); (ii) Multi-Shell (Sherbrooke 3-Shell dataset, X ∈ ℝ^(128×128×64×193) (Garyfallidis et al., 2014)); (iii) Single Shell (Parkinson's Progression Markers Initiative (PPMI) dataset, X ∈ ℝ^(116×116×72×64) (Marek et al., 2011)). Simulated experiments are carried out on the fastMRI datasets (Tibrewala et al., 2023; Zbontar et al., 2018).
Dataset Splits | Yes | In order to quantify the results, we perform a 3-fold cross-validation (Hastie et al., 2009) at two exemplary voxel locations: corpus callosum (CC), a single-fiber structure, and centrum semiovale (CSO), a crossing-fiber structure. The data is divided into three different subsets for the selected voxels, and data from two folds are used to fit the model, which predicts the data on the held-out fold.
Hardware Specification | Yes | All experiments were performed on GeForce RTX 3090 GPUs in PyTorch (Paszke et al., 2019).
Software Dependencies | No | All experiments were performed on GeForce RTX 3090 GPUs in PyTorch (Paszke et al., 2019). The training duration for one Fθ is approximately five hours on a single GeForce RTX 3090 GPU with 5578MB of VRAM. ... We implemented MPPCA using the code from DIPY (Garyfallidis et al., 2014).
Experiment Setup | Yes | Adam optimizer was used to optimize θ with a fixed learning rate of 1e-4 and a batch size of 32, and Fθ was trained for 1e5 steps from scratch. ... We set σ_t² = β_{1,...,T} and hold β_{1,...,T} as hyperparameters. Since we are performing a deterministic sampling process, η in Eq. (10) is set to 0 ... T_c = 300. ... During sampling, T_r = 50. β_1 = 0.93 and β_2 = 0.95, and changing their values has little impact on the results ... η = 0 and p = 10 if no special instructions are provided. CSNR values are provided in the figure captions.
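The 3-fold cross-validation protocol quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not code from the paper: `three_fold_splits` is a hypothetical helper that partitions the per-voxel measurements (e.g. the 150 diffusion directions of the Stanford HARDI volume) into three folds, each held out once while the other two are used for fitting.

```python
import random

def three_fold_splits(n_measurements, seed=0):
    """Return three (train_indices, test_indices) pairs for 3-fold CV.

    Hypothetical sketch: measurements at a chosen voxel are shuffled,
    partitioned into three folds, and each fold is held out in turn.
    """
    rng = random.Random(seed)
    indices = list(range(n_measurements))
    rng.shuffle(indices)
    folds = [indices[i::3] for i in range(3)]  # three near-equal folds
    splits = []
    for k in range(3):
        test = folds[k]
        train = [i for j in range(3) if j != k for i in folds[j]]
        splits.append((train, test))
    return splits

# Example: 150 measurements, as in the Stanford HARDI dataset.
splits = three_fold_splits(150)
for train, test in splits:
    assert len(train) + len(test) == 150   # folds partition the data
    assert not set(train) & set(test)      # train/test are disjoint
```

Each measurement appears in exactly one held-out fold across the three splits, matching the "fit on two folds, predict the third" protocol described above.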