GraphCL: Graph-based Clustering for Semi-Supervised Medical Image Segmentation
Authors: Mengzhu Wang, Houcheng Su, Jiao Li, Chuan Li, Nan Yin, Li Shen, Jingcai Guo
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on three standard benchmarks show that the proposed GraphCL algorithm outperforms state-of-the-art semi-supervised medical image segmentation methods. The source code is available at https://github.com/dreamkily/GraphCL |
| Researcher Affiliation | Academia | 1Hebei University of Technology 2Hong Kong University of Science and Technology 3University of Electronic Science and Technology of China 4National University of Defense Technology 5Sun Yat-Sen University 6The Hong Kong Polytechnic University. Correspondence to: Nan Yin <EMAIL>, Jingcai Guo <EMAIL>. |
| Pseudocode | No | The paper describes the methodology using prose and mathematical equations in sections such as '2. Method', '2.1. Notations and Definitions', '2.2. Bidirectional Copy-Paste Framework', and '2.3. Structural Graph Model for Segmentation', but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/dreamkily/GraphCL |
| Open Datasets | Yes | All experiments are performed on three public datasets with different imaging modalities and segmentation tasks: Automatic Cardiac Diagnosis Challenge dataset (ACDC) (Bernard et al., 2018), Atrial Segmentation Challenge dataset (LA) (Xiong et al., 2021) and Pancreas-NIH dataset (Roth et al., 2015). |
| Dataset Splits | Yes | Following the protocol in SS-Net (Wu et al., 2022), we conduct semi-supervised experiments with different labeled data ratios (i.e., 5% and 10%). For Pancreas-NIH dataset, we evaluate with a labeled ratio of 20% (Luo et al., 2021a; Shi et al., 2021). |
| Hardware Specification | Yes | LA Dataset experiments run on an NVIDIA A800 GPU, while Pancreas-NIH and ACDC datasets use an NVIDIA 3090 GPU. |
| Software Dependencies | No | The paper mentions the use of optimizers like SGD and Adam, and network backbones such as 3D V-Net and 2D U-Net, but does not provide specific version numbers for any software libraries or dependencies (e.g., PyTorch, TensorFlow, CUDA, Python). |
| Experiment Setup | Yes | All experiments use default settings of α = 0.5, κ = 0.01 and τ = 2, with fixed random seeds. LA Dataset experiments run on an NVIDIA A800 GPU... Training uses SGD with an initial learning rate of 0.01, decaying by 10% every 2.5K iterations. We adopt a 3D V-Net backbone, with patches cropped to 112 × 112 × 80 ... Batch size is 8, split equally between labeled and unlabeled patches, with pre-training and self-training at 5K and 15K iterations, respectively. |
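The reported optimizer schedule (initial learning rate 0.01, decaying by 10% every 2.5K iterations) can be sketched as a simple step-decay function. This is a minimal illustration of the schedule as described in the table, not code from the GraphCL repository; the function name and parameter defaults are assumptions.

```python
def learning_rate(iteration, base_lr=0.01, decay=0.9, step=2500):
    """Step-decay schedule as reported: the learning rate shrinks by 10%
    every `step` iterations, starting from `base_lr`.

    Note: names and defaults are illustrative, not taken from the
    official GraphCL implementation.
    """
    return base_lr * decay ** (iteration // step)


# Example values over the 5K pre-training iterations reported in the table:
# iteration 0    -> 0.01
# iteration 2500 -> 0.009
# iteration 5000 -> 0.0081 (approximately)
```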