Neighbor Does Matter: Density-Aware Contrastive Learning for Medical Semi-supervised Segmentation

Authors: Feilong Tang, Zhongxing Xu, Ming Hu, Wenxue Li, Peng Xia, Yiheng Zhong, Hanjun Wu, Jionglong Su, Zongyuan Ge

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the Multi-Organ Segmentation Challenge dataset demonstrate that our proposed method outperforms state-of-the-art methods, highlighting its efficacy in medical image segmentation tasks. We evaluate our proposed method on the Automatic Cardiac Diagnosis Challenge (ACDC) (Bernard et al. 2018) and the Synapse multi-organ segmentation dataset (Landman et al. 2015) under various semi-supervised settings, where our method achieves state-of-the-art performance. Experimental results on benchmark datasets demonstrate that our method significantly improves upon the efficacy of previous state-of-the-art methods.
Researcher Affiliation | Academia | 1 AIM Lab, Faculty of IT, Monash University; 2 Xi'an Jiaotong-Liverpool University; 3 UNC-Chapel Hill
Pseudocode | No | The paper describes the methodology using textual explanations, equations (e.g., Eq. 1, 2, 3, 4, 5, 9, 10), and figures (e.g., Figure 2: Overview of the proposed unified learning framework), but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository. The phrase "More details are in the appendix" does not specify whether code is included there.
Open Datasets | Yes | We evaluate our proposed method on four public datasets with different imaging modalities, the Automatic Cardiac Diagnosis Challenge dataset (ACDC) (Bernard et al. 2018) and Synapse Dataset (Landman et al. 2015).
Dataset Splits | Yes | Table 1 compares with state-of-the-art methods on the ACDC dataset (with 5% and 10% labeled data) and the Synapse dataset (with 10% and 20% labeled data); metrics report mean ± standard deviation over three random seeds. Scans used (labeled / unlabeled): ACDC: 3 (5%) / 67 (95%) and 7 (10%) / 63 (90%); Synapse: 2 (10%) / 18 (90%) and 4 (20%) / 16 (80%).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU specifications, or memory amounts used for running the experiments. It only mentions general training parameters.
Software Dependencies | No | The paper mentions using the SGD optimizer but does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | All models are trained with the SGD optimizer, where the initial learning rate is 0.01, momentum is 0.9, and weight decay is 10^-4. The network converges after 30,000 iterations of training. An exception is made for the first 1,000 iterations, where λ_cross and λ_CL are set to 1 and 0, respectively, which prevents model collapse caused by the initialized prototypes. Empirically, the hyperparameter N_q (the number of anchors per class in each mini-batch) is set to 256. For each anchor, the numbers of positive keys N_p^+ and negative keys N_p^- are both set to 512. The temperature coefficient τ in Eq. 10 is set to 0.4.
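The quoted training recipe can be sketched as follows. This is a hedged reconstruction from the stated hyperparameters only, not the authors' code: the post-warm-up loss weights and the InfoNCE-style form of the contrastive loss (the paper's Eq. 10 may differ in detail) are assumptions, and `SGD_CONFIG` is an illustrative name.

```python
import numpy as np

# SGD configuration as stated in the paper (framework-agnostic constants).
SGD_CONFIG = dict(lr=0.01, momentum=0.9, weight_decay=1e-4, iterations=30_000)

def loss_weights(iteration, warmup_iters=1000):
    """For the first 1,000 iterations, lambda_cross = 1 and lambda_CL = 0,
    preventing collapse caused by the randomly initialized prototypes.
    The post-warm-up values of (1, 1) are an assumption; the paper does
    not state them explicitly."""
    if iteration < warmup_iters:
        return 1.0, 0.0  # (lambda_cross, lambda_CL)
    return 1.0, 1.0

def contrastive_loss(anchor, pos_keys, neg_keys, tau=0.4):
    """InfoNCE-style temperature-scaled contrastive loss (a sketch).
    anchor: (D,); pos_keys: (N_p^+, D); neg_keys: (N_p^-, D)."""
    a = anchor / np.linalg.norm(anchor)
    p = pos_keys / np.linalg.norm(pos_keys, axis=1, keepdims=True)
    n = neg_keys / np.linalg.norm(neg_keys, axis=1, keepdims=True)
    pos_sim = p @ a / tau  # cosine similarity to each positive key, scaled
    neg_sim = n @ a / tau  # cosine similarity to each negative key, scaled
    # Each positive is contrasted against all negatives:
    # -log( exp(s+) / (exp(s+) + sum_j exp(s_j^-)) )
    losses = [-s + np.log(np.exp(s) + np.exp(neg_sim).sum()) for s in pos_sim]
    return float(np.mean(losses))

rng = np.random.default_rng(0)
anchor = rng.standard_normal(16)
pos_keys = rng.standard_normal((512, 16))  # N_p^+ = 512 positive keys
neg_keys = rng.standard_normal((512, 16))  # N_p^- = 512 negative keys
print(loss_weights(500))                   # warm-up phase: (1.0, 0.0)
print(contrastive_loss(anchor, pos_keys, neg_keys) > 0.0)  # True
```

In practice this loss would be averaged over the N_q = 256 anchors per class in each mini-batch; here a single anchor is shown for brevity.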