L-Diffusion: Laplace Diffusion for Efficient Pathology Image Segmentation

Authors: Weihan Li, Linyun Zhou, Jian Yang, Shengxuming Zhang, Xiangtong Du, Xiuming Zhang, Jing Zhang, Chaoqing Xu, Mingli Song, Zunlei Feng

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental evaluations demonstrate that L-Diffusion attains improvements of up to 7.16%, 26.74%, 16.52%, and 3.55% on tissue segmentation datasets, and 20.09%, 10.67%, 14.42%, and 10.41% on cell segmentation datasets, as quantified by the DICE, MPA, mIoU, and FwIoU metrics.
Researcher Affiliation | Academia | 1) State Key Laboratory of Blockchain and Data Security, Zhejiang University; 2) School of Software Technology, Zhejiang University; 3) School of Medical Imaging, Xuzhou Medical University; 4) The First Affiliated Hospital, College of Medicine, Zhejiang University; 5) School of Computer and Computing Science, Hangzhou City University; 6) Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security. Correspondence to: Zunlei Feng <EMAIL>, Xiuming Zhang <xm EMAIL>.
Pseudocode | No | The paper includes Appendix A, "Mathematical Derivations", which contains mathematical formulas and transformations, but it does not present any structured pseudocode or algorithm blocks with labeled steps.
Open Source Code | Yes | The source codes are available at https://github.com/Lweihan/LDiffusion.
Open Datasets | Yes | We employ six distinct tissue and cellular datasets to validate the multi-scale segmentation capabilities of L-Diffusion. These datasets encompass: a colorectal cancer histopathology dataset provided by the Guangdong Provincial People's Hospital (referred to as CRCD) (Ye et al., 2023), a melanoma histopathology dataset from the PUMA challenge (referred to as PUMA) (Schuiveling et al., 2024), a publicly accessible dataset specifically curated for tissue segmentation tasks in breast cancer pathology (referred to as BCSS) (Amgad et al., 2019), and a publicly available dataset dedicated to multi-class cellular segmentation (referred to as PanNuke) (Gamper et al., 2020). Furthermore, comprehensive details regarding these datasets are provided in Table 7 of Appendix B.
Dataset Splits | No | The paper discusses "Performance Across Varied Annotation Ratios" in Section 5.2 and presents Table 3 with annotation ratios of 10%, 20%, 30%, 50%, 70%, and 100% for an ablation study on the PUMA dataset. However, it does not provide explicit training, validation, and test splits (e.g., 80/10/10 percentages or specific sample counts) for the main experimental evaluations across all datasets used.
Hardware Specification | No | "To train the diffusion model, we configure the batch size to 1, employ the Adam optimizer with a learning rate of 1×10⁻⁵, and typically set the number of sampling steps to 5–15, contingent upon the available GPU." This statement does not specify any particular GPU model or other hardware components.
Software Dependencies | No | The paper mentions using ConvNeXT as the segmentation network and the Adam optimizer, but it does not provide specific version numbers for any programming languages, libraries, or frameworks (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | Kn is set to 100. We adopt ConvNeXT (Liu et al., 2022) as the segmentation network. To train the diffusion model, we configure the batch size to 1, employ the Adam optimizer with a learning rate of 1×10⁻⁵, and typically set the number of sampling steps to 5–15, contingent upon the available GPU. In the contrastive learning module, the temperature τ ranges from 0.05 to 0.1 to ensure effective sharpening of the distribution without gradient explosion. In addition, to train ConvNeXT, we configure the batch size to 32 and employ the Adam optimizer with a learning rate of 1×10⁻³.
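The four metrics reported above (DICE, MPA, mIoU, FwIoU) are standard segmentation measures that can all be derived from a per-class confusion matrix. The sketch below shows their usual textbook definitions; it is not code from the paper, and the macro-averaging convention is an assumption.

```python
def segmentation_metrics(C):
    """Compute DICE, MPA, mIoU, and FwIoU from a confusion matrix C,
    where C[i][j] counts pixels of true class i predicted as class j.
    Standard definitions; not the authors' implementation."""
    n = len(C)
    total = sum(sum(row) for row in C)
    tp = [C[i][i] for i in range(n)]                               # true positives per class
    true_px = [sum(C[i]) for i in range(n)]                        # pixels of true class i
    pred_px = [sum(C[i][j] for i in range(n)) for j in range(n)]   # pixels predicted as class j

    # DICE (macro-averaged): 2*TP / (|true| + |pred|) per class
    dice = sum(2 * tp[i] / (true_px[i] + pred_px[i]) for i in range(n)) / n
    # MPA: mean per-class pixel accuracy, TP / |true|
    mpa = sum(tp[i] / true_px[i] for i in range(n)) / n
    # IoU per class: TP / (|true| + |pred| - TP)
    iou = [tp[i] / (true_px[i] + pred_px[i] - tp[i]) for i in range(n)]
    miou = sum(iou) / n
    # FwIoU: IoU weighted by each class's pixel frequency
    fwiou = sum(true_px[i] / total * iou[i] for i in range(n))
    return dice, mpa, miou, fwiou
```

For a balanced two-class example such as `[[8, 2], [1, 9]]`, FwIoU coincides with mIoU because both classes have equal pixel counts; the frequency weighting only matters for imbalanced data.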
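The stated role of the temperature τ ("effective sharpening of the distribution without gradient explosion") can be illustrated with a minimal temperature-scaled softmax, assuming the standard formulation used in contrastive learning; this is a sketch, not the authors' code.

```python
import math

def softmax_with_temperature(logits, tau):
    """Temperature-scaled softmax: dividing logits by a small tau
    (e.g. the paper's 0.05-0.1 range) sharpens the distribution,
    while tau near 1 leaves it comparatively flat."""
    m = max(l / tau for l in logits)                 # subtract max for numerical stability
    exps = [math.exp(l / tau - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

With logits `[1.0, 0.8, 0.2]`, τ = 0.05 concentrates nearly all probability mass on the largest logit, whereas τ = 1.0 yields a much softer distribution; pushing τ far below 0.05 keeps sharpening but makes gradients increasingly ill-conditioned, which is consistent with the paper's stated lower bound.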