Self-calibration Enhanced Whole Slide Pathology Image Analysis

Authors: Haoming Luo, Xiaotian Yu, Shengxuming Zhang, Jiabin Xia, Jian Yang, Yuning Sun, Xiuming Zhang, Jing Zhang, Zunlei Feng

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiment results demonstrate that the proposed framework can rapidly deliver accurate and explainable results for pathological grading and prognosis tasks. Section 4 is titled 'Experiments' and details datasets, comparison with SOTA, tumor marker mining, generalization performance validation, and an ablation study.
Researcher Affiliation | Collaboration | The authors are affiliated with 'Zhejiang University' (academic), 'Midea Group (Shanghai) Co., Ltd.' (industry), 'Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security' (research institute), and 'The First Affiliated Hospital, College of Medicine, Zhejiang University' (academic/medical). The presence of both university and corporate affiliations indicates a collaboration.
Pseudocode | No | The paper describes the methodology using descriptive text and mathematical equations in sections such as '3.1 Global Superpixel Graph Classification' and '3.2 Focus Area Prediction', but no structured pseudocode or algorithm blocks are provided.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide any links to a code repository in the main text or supplementary information.
Open Datasets | Yes | The pathological datasets utilized in our experiments include PANDA [Bulten et al., 2022], CAMELYON16 [Litjens et al., 2018], BRCA [Lingle et al., 2016], and LUAD [Albertina et al., 2016].
Dataset Splits | No | The paper lists various datasets used (PANDA, CAMELYON16, BRCA, LUAD, HCC, GC, CRC) but does not provide specific details on how these datasets were split into training, validation, and test sets, whether as percentages, explicit sample counts, or references to standard splits. While Section 4.3 mentions a selection of 100 CRC patient cases for a specific analysis, this is not a general train/test split description.
Hardware Specification | Yes | All experiments run on an RTX3090 GPU.
Software Dependencies | No | The paper mentions using SGD [Ruder, 2016] as an optimizer and the SLIC superpixel generation technique [Achanta et al., 2010], but it does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Experiment Setup | Yes | We employ SLIC superpixel segmentation (n = 1024) and a 3-layer GCN (d = 512) with layer normalization and residual connections. The architecture includes a 12-layer transformer encoder (4 attention heads). With batch size = 4 and K = 4 focused regions, the local branch achieves an effective batch size of 16. Training uses SGD [Ruder, 2016] (momentum = 0.9, weight decay = 5×10⁻⁴) with layer-specific learning rates (0.002/0.01).
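The hyperparameters reported in this row can be collected into a configuration sketch. This is a minimal illustration only: the names below (`cfg`, `effective_batch_size`, `sgd_param_groups`) are hypothetical and simply mirror the values quoted from the paper; they are not taken from any released code.

```python
# Hypothetical summary of the reported experiment setup.
# All key names are illustrative; values are those quoted in the paper.
cfg = {
    "n_superpixels": 1024,     # SLIC superpixels per slide (n = 1024)
    "gcn_layers": 3,           # 3-layer GCN
    "gcn_dim": 512,            # hidden dimension d = 512
    "transformer_layers": 12,  # transformer encoder depth
    "attention_heads": 4,
    "batch_size": 4,           # slides per batch
    "k_focus_regions": 4,      # K focused regions per slide
    "momentum": 0.9,
    "weight_decay": 5e-4,      # 5×10⁻⁴
    "lr_backbone": 0.002,      # layer-specific learning rates
    "lr_head": 0.01,
}

def effective_batch_size(cfg):
    """Local branch sees K focused regions per slide, so the
    effective batch size is batch_size × K (4 × 4 = 16)."""
    return cfg["batch_size"] * cfg["k_focus_regions"]

def sgd_param_groups(cfg):
    """Two parameter groups with layer-specific learning rates,
    in the dict format that torch.optim.SGD accepts as param_groups."""
    shared = {"momentum": cfg["momentum"], "weight_decay": cfg["weight_decay"]}
    return [
        {"name": "backbone", "lr": cfg["lr_backbone"], **shared},
        {"name": "head", "lr": cfg["lr_head"], **shared},
    ]
```

Splitting the optimizer into named parameter groups is one plausible way to realize the paper's "layer-specific learning rates (0.002/0.01)"; the exact assignment of layers to groups is not specified in the paper.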