ConDSeg: A General Medical Image Segmentation Framework via Contrast-Driven Feature Enhancement

Authors: Mengqi Lei, Haochen Wu, Xinhua Lv, Xin Wang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental — Extensive experiments on five datasets across three scenarios demonstrate the state-of-the-art performance of our method, proving its advanced nature and general applicability to various medical image segmentation scenarios.
Researcher Affiliation: Collaboration — Mengqi Lei¹, Haochen Wu¹, Xinhua Lv¹, Xin Wang². ¹China University of Geosciences, Wuhan 430074, China; ²Baidu Inc., Beijing, China. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode: No — The paper describes the methodology in text and block diagrams but does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code: Yes — Code: https://github.com/Mengqi-Lei/ConDSeg
Open Datasets: Yes — We conducted experiments on five challenging public datasets: Kvasir-SEG (Jha et al. 2020), Kvasir-Sessile (Jha et al. 2021), GlaS (Sirinukunwattana et al. 2017), ISIC-2016 (Gutman et al. 2016), and ISIC-2017 (Codella et al. 2018), covering subdivision tasks across three medical image modalities.
Dataset Splits: No — Detailed information about the datasets is shown in the Supplementary Material.
Hardware Specification: Yes — All experiments were conducted on an NVIDIA GeForce RTX 4090 GPU, with the image size adjusted to 256×256 pixels.
Software Dependencies: No — The paper mentions using the Adam optimizer and ResNet-50 as the default encoder, but does not provide specific version numbers for any software dependencies such as programming languages, libraries, or frameworks.
Experiment Setup: Yes — The batch size was set to 4, and the Adam optimizer (Kingma and Ba 2014) was used for optimization. We use ResNet-50 (He et al. 2016) as the default encoder... In the first stage, the learning rate is set to 1e-4. In the second stage, we load the weights of the Encoder and set its learning rate to a lower 1e-5, while for the rest of the network, the learning rate is set to 1e-4. The window size for CDFA is set to 3.
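The second-stage setup above (pretrained encoder at 1e-5, rest of the network at 1e-4) maps naturally onto per-parameter-group learning rates as supported by PyTorch optimizers. The sketch below is an assumption-laden illustration, not the authors' code: the `encoder.` name prefix and the helper function are hypothetical.

```python
# Minimal sketch of the paper's two-stage learning-rate split (second stage).
# Assumes a PyTorch-style API where an optimizer accepts a list of parameter
# groups, each with its own "lr". The "encoder." prefix is an assumption
# about how parameters are named, not taken from the released code.

def build_param_groups(named_params, encoder_prefix="encoder.",
                       encoder_lr=1e-5, base_lr=1e-4):
    """Split parameters into two groups: the pretrained encoder gets the
    lower learning rate (1e-5); the rest of the network uses 1e-4."""
    encoder, rest = [], []
    for name, param in named_params:
        (encoder if name.startswith(encoder_prefix) else rest).append(param)
    return [
        {"params": encoder, "lr": encoder_lr},
        {"params": rest, "lr": base_lr},
    ]

# With PyTorch, the groups would be passed directly to Adam, e.g.:
#   optimizer = torch.optim.Adam(build_param_groups(model.named_parameters()))
```

This mirrors the common fine-tuning pattern of keeping a smaller step size on pretrained weights while training new layers at the full rate.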