Continuous U-Net: Faster, Greater and Noiseless

Authors: Chun-Wun Cheng, Christina Runkel, Lihao Liu, Raymond H. Chan, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate, through extensive numerical and visual results, that our model outperforms existing U-Net blocks for several medical image segmentation benchmarking datasets."
Researcher Affiliation | Academia | Chun-Wun Cheng (EMAIL), Department of Applied Mathematics and Theoretical Physics, University of Cambridge; Christina Runkel (EMAIL), Department of Applied Mathematics and Theoretical Physics, University of Cambridge; Lihao Liu (EMAIL), Department of Applied Mathematics and Theoretical Physics, University of Cambridge; Raymond H. Chan (EMAIL), Department of Mathematics, City University of Hong Kong; Carola-Bibiane Schönlieb (EMAIL), Department of Applied Mathematics and Theoretical Physics, University of Cambridge; Angelica I Aviles-Rivero (EMAIL), Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Pseudocode | No | The paper describes methods and proofs using mathematical equations and text, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code or a direct link to a code repository for the Continuous U-Net methodology.
Open Datasets | Yes | "We extensively evaluate our continuous U-Net using six medical imaging datasets. They are highly heterogeneous, covering a wide range of medical data and significantly varying in terms of image sizes, fidelity of segmentation masks and dataset sizes. An overview of the datasets used and their properties can be found in Table 2." The six datasets are: GlaS Challenge (Sirinukunwattana et al., 2017), STARE (Hoover et al., 2000), Kvasir-SEG (Jha et al., 2020), Data Science Bowl (Caicedo et al., 2019), ISIC Challenge (Gutman et al., 2016), and Breast Ultrasound Images (Al-Dhabyani et al., 2020).
Dataset Splits | Yes | "An overview of the datasets used and their properties can be found in Table 2."

Table 2: Characteristics of the datasets used in our experiments.

  Dataset                                               # Samples  # Train  # Test
  GlaS Challenge (Sirinukunwattana et al., 2017)           165        85       80
  STARE (Hoover et al., 2000)                               20        16        4
  Kvasir-SEG (Jha et al., 2020)                           1000       800      200
  Data Science Bowl (Caicedo et al., 2019)                 841       707      134
  ISIC Challenge (Gutman et al., 2016)                    1279       900      379
  Breast Ultrasound Images (Al-Dhabyani et al., 2020)      647       518      129
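As a quick arithmetic check on the reported splits, the train and test counts in Table 2 sum to each dataset's total; the train fraction varies widely, from roughly half (GlaS) to 80% (STARE, Kvasir-SEG, and others). A small illustrative script (not from the paper) verifies this:

```python
# Dataset characteristics from Table 2: (total, train, test)
splits = {
    "GlaS Challenge": (165, 85, 80),
    "STARE": (20, 16, 4),
    "Kvasir-SEG": (1000, 800, 200),
    "Data Science Bowl": (841, 707, 134),
    "ISIC Challenge": (1279, 900, 379),
    "Breast Ultrasound Images": (647, 518, 129),
}

for name, (total, train, test) in splits.items():
    # Every row of Table 2 should be internally consistent.
    assert train + test == total, f"{name}: train + test != total"
    print(f"{name}: {train / total:.0%} train / {test / total:.0%} test")
```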
Hardware Specification | No | The paper mentions "limited GPU memory" in a general context but does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions using "a fourth-order Runge-Kutta (RK4) solver" and an "Adam" optimizer, but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | "Evaluation Protocol. Following a standard protocol in medical image segmentation, we evaluate the performance of the proposed continuous U-Net and existing techniques using three metrics: the Dice score, accuracy and averaged Hausdorff distance. For a fair comparison, we use a shared code-base for all experiments. More precisely, we set a learning rate of 1×10^-3, a step-based learning rate scheduler with a step size of 1 and a gamma value of 0.999. We use a fourth-order Runge-Kutta (RK4) solver, a batch size of 16 and train all networks for 500 epochs."

Table 10: Overview of training settings for all experiments.

  Parameter                            Value
  Loss function                        Binary Cross Entropy Loss
  Optimiser                            Adam
  Learning rate                        10^-4
  Learning rate schedule               Multiplication of learning rate with 0.999 every epoch
  Epochs                               500
  Batch size                           16
  Levels of U-Net architecture         4
  Number of filters per block          3, 6, 12, 24
  Tolerance (for contin. blocks only)  10^-3
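Since the paper's code is not released, the training configuration above can only be approximated. The sketch below wires the Table 10 settings (Adam at 10^-4, BCE loss, batch size 16, learning rate multiplied by 0.999 each epoch) into a generic PyTorch loop; the model and tensors are hypothetical stand-ins, not the paper's continuous U-Net, and the loop is truncated to two epochs for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the (unreleased) model and data.
model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())
images = torch.rand(16, 3, 32, 32)                    # batch size 16, as in Table 10
masks = (torch.rand(16, 3, 32, 32) > 0.5).float()

criterion = nn.BCELoss()                              # Binary Cross Entropy Loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr 10^-4
# Step-based schedule: multiply the learning rate by 0.999 every epoch.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.999)

for epoch in range(2):                                # the paper trains for 500 epochs
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    scheduler.step()

print(scheduler.get_last_lr()[0])                     # lr after two decay steps
```

Note that the prose quotes a learning rate of 1×10^-3 while Table 10 lists 10^-4; the sketch follows Table 10, but the discrepancy is in the paper itself.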