Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction

Authors: Jiayang Shi, Junyi Zhu, Daniel Pelt, Joost Batenburg, Matthew B. Blaschko

TMLR 2024

Reproducibility checklist (each row gives the variable, its result, and the supporting LLM response):
Research Type: Experimental. "Extensive experiments demonstrate that our method achieves higher reconstruction quality with sparse views and remains robust to noise in the measurements as indicated by common numerical metrics. The obtained latent variables can also serve as network initialization for the new object and speed up the learning process."
Researcher Affiliation: Academia. Jiayang Shi* (Leiden Institute of Advanced Computer Science, Leiden University); Junyi Zhu* (Center for Processing Speech and Images, KU Leuven); Daniël M. Pelt (Leiden Institute of Advanced Computer Science, Leiden University); K. Joost Batenburg (Leiden Institute of Advanced Computer Science, Leiden University); Matthew B. Blaschko (Center for Processing Speech and Images, KU Leuven).
Pseudocode: Yes. "Algorithm 1 INR-Bayes: Joint reconstruction of INR using Bayesian framework"
Open Source Code: No. The paper does not explicitly state that source code for the described method is released, nor does it provide a direct link to a code repository.
Open Datasets: Yes. "Our study utilizes four CT datasets: Walnut CT with walnut scans (Der Sarkissian et al., 2019), Aluminum CT of aluminum alloy at different fatigue-corrosion phases from Tomobank (De Carlo et al., 2018), Lung CT from the Medical Segmentation Decathlon (Antonelli et al., 2022) and 4DCT on the lung area (Castillo et al., 2009). Additionally, we include a natural image dataset, CelebA (Liu et al., 2015), to evaluate the generalizability to broader applications."
Dataset Splits: No. The paper describes experimental configurations such as "Intra-object: 10 equidistant slices from an object's center", "Inter-object: 10 slices from different objects", and "4DCT: 10 temporal phases from one 4DCT slice", and mentions selecting "5 consecutive slices from new objects" for evaluation on unseen data. These describe how subsets are chosen for individual experiments, but the paper does not specify comprehensive training/validation/test splits for the full datasets.
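The quoted slice-selection configurations can be made concrete with a short sketch. The paper states only "10 equidistant slices from an object's center"; the window width (`span`) and the exact index arithmetic below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def equidistant_center_slices(n_total: int, n_pick: int = 10, span: int = 90) -> np.ndarray:
    """Pick `n_pick` equidistant slice indices centred on the volume middle.

    Illustrative only: `span` (total window width, in slices) is an
    assumed parameter; the paper does not specify the spacing.
    """
    center = n_total // 2
    half = span // 2
    idx = np.linspace(center - half, center + half, n_pick)
    return np.clip(np.round(idx).astype(int), 0, n_total - 1)

# e.g. a hypothetical volume with 501 axial slices
indices = equidistant_center_slices(501, n_pick=10, span=90)
```

With these assumed numbers, the call yields ten indices spaced 10 slices apart, symmetric about the central slice 250.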
Hardware Specification: Yes. "The experiment setting is aligned with the inter-object configuration on Walnut CT dataset in Table 1. These assessments are performed under the same conditions with a 40GB A100 GPU, to ensure consistency in our evaluation."
Software Dependencies: No. The paper mentions several software tools and libraries, such as the Tomosipo package, the ASTRA Toolbox, and the Adam optimizer, but does not pin version numbers for any of them. For example, it cites "Tomosipo package (Hendriksen et al., 2021)" and "Astra-Toolbox (Van Aarle et al., 2016)", which are literature citations rather than version specifications.
Experiment Setup: Yes. "All INR-based methods undergo 30K iterations, with MAML using the first 10K and FedAvg the first 20K for meta initialization, then proceeding to adaptation. For classical methods, FBP_CUDA computes the FBP reconstruction while SIRT_CUDA from the Astra-Toolbox is used for SIRT, set to run for 5,000 iterations. ... Our INR-Bayes undergoes 300 EM loops. For each loop, the E-step iterates 100 times to update the posterior approximation. All INR-based models utilize the Adam optimizer (Kingma & Ba, 2014) with the first moment 0.9 and the second moment 0.999. The learning rate is set to 1e-5. For our method, the additional hyperparameter β for the KL divergence term is determined as 1e-14 for Walnut CT and 1e-16 for Lung CT and 4DCT."
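The reported schedule (300 EM loops of 100 E-step iterations each, Adam with moments 0.9/0.999, learning rate 1e-5, and a KL weight β) can be sketched as a toy loop. Everything below the constants is a minimal stand-in, not the authors' implementation: the quadratic `data_fit` gradient, the Gaussian-style KL gradient, and the M-step that copies the posterior mean into the prior are all assumptions for illustration.

```python
import numpy as np

# Schedule and optimiser settings as quoted from the paper
N_EM_LOOPS = 300        # outer EM loops
E_STEP_ITERS = 100      # inner iterations updating the posterior approximation
LR = 1e-5
BETA1, BETA2, EPS = 0.9, 0.999, 1e-8
KL_WEIGHT = 1e-14       # beta for Walnut CT (1e-16 for Lung CT / 4DCT)

def adam_step(theta, grad, m, v, t):
    """One Adam update (Kingma & Ba, 2014) with bias correction."""
    m = BETA1 * m + (1 - BETA1) * grad
    v = BETA2 * v + (1 - BETA2) * grad ** 2
    m_hat = m / (1 - BETA1 ** t)
    v_hat = v / (1 - BETA2 ** t)
    return theta - LR * m_hat / (np.sqrt(v_hat) + EPS), m, v

# Toy stand-ins for the real objectives (assumed, not the paper's losses)
target = np.array([1.0, -2.0, 0.5])
def grad_loss(theta, prior_mean):
    data_grad = theta - target                  # grad of 0.5 * ||theta - target||^2
    kl_grad = KL_WEIGHT * (theta - prior_mean)  # grad of a Gaussian-style KL term
    return data_grad + kl_grad

theta = np.zeros(3)          # per-object parameters (posterior mean)
prior_mean = np.zeros(3)     # shared latent prior, refined in the M-step
m = np.zeros(3); v = np.zeros(3); t = 0
for _ in range(N_EM_LOOPS):
    for _ in range(E_STEP_ITERS):   # E-step: update the posterior approximation
        t += 1
        theta, m, v = adam_step(theta, grad_loss(theta, prior_mean), m, v, t)
    prior_mean = theta.copy()       # M-step stand-in: refresh the shared prior
```

Note how small the effective step is: with Adam's normalised updates, 30K iterations at lr 1e-5 move each parameter by roughly 0.3, which is why the schedule runs for so many loops.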