OODML: Whole Slide Image Classification Meets Online Pseudo-Supervision and Dynamic Mutual Learning

Authors: Tingting Zheng, Kui Jiang, Hongxun Yao, Yi Xiao, Zhongyuan Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on four datasets demonstrate that our OODML surpasses the state-of-the-art by 3.3% and 6.9% on the CAMELYON16 and TCGA Lung datasets."
Researcher Affiliation | Academia | Tingting Zheng¹, Kui Jiang¹, Hongxun Yao¹*, Yi Xiao², Zhongyuan Wang²; ¹Harbin Institute of Technology, ²Wuhan University
Pseudocode | No | The paper describes the methodology using architectural diagrams (Figure 2) and mathematical formulations (Equations 1-18), but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology. The phrase "Results are derived from their papers, with others taken from official code implementations" refers to prior work, not the authors' own code.
Open Datasets | Yes | "To validate our OODML, we conduct extensive experiments on the CAMELYON16 (Bejnordi et al. 2017), Breast Carcinoma Subtyping (Brancati et al. 2022), TCGA Lung Cancer and TCGA Esophageal Cancer (Tomczak, Czerwińska, and Wiznerowicz 2015) datasets."
Dataset Splits | Yes | "The CAMELYON16 official training set is randomly divided into training and validation sets at a 9:1 ratio. TCGA Lung and TCGA ESCA datasets are randomly split into training, validation, and testing sets with ratios of 65:10:25 and 3:1:1, respectively. For BRACS, we follow the official dataset split (Brancati et al. 2022; Zhang et al. 2025), with 547 WSIs available: 395 for training, 65 for validation, and 87 for testing. We report the mean and standard deviation for at least 5 models in all experiments."
Hardware Specification | Yes | "With the above settings, we train OODML with 200 epochs and batch size 1 on a single NVIDIA RTX 3090Ti GPU."
Software Dependencies | No | The paper mentions the "AdaMax optimizer (Adam et al. 2014)" and various pre-trained models such as "ResNet50", but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used for implementation.
Experiment Setup | Yes | "AdaMax optimizer (Adam et al. 2014) with a weight decay of 1e-5 and the initial learning rate of 1e-4 are used. With the above settings, we train OODML with 200 epochs and batch size 1 on a single NVIDIA RTX 3090Ti GPU."
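The reported dataset splits (CAMELYON16 9:1 train/val; TCGA Lung 65:10:25; TCGA ESCA 3:1:1) could be reproduced with a simple ratio-based random split. The helper below is a hypothetical sketch, not the authors' code; `split_wsi_ids` and its arguments are illustrative names.

```python
import random

def split_wsi_ids(ids, ratios, seed=0):
    """Randomly split a list of WSI identifiers by the given integer ratios.

    Hypothetical helper mirroring the splits described in the paper:
    9:1 (CAMELYON16 train/val), 65:10:25 (TCGA Lung), 3:1:1 (TCGA ESCA).
    """
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = ids[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    splits, start = [], 0
    for r in ratios[:-1]:
        end = start + round(len(ids) * r / total)
        splits.append(shuffled[start:end])
        start = end
    splits.append(shuffled[start:])    # remainder goes to the last split
    return splits

# Example: a TCGA ESCA-style 3:1:1 split of 100 slide IDs
train, val, test = split_wsi_ids(list(range(100)), (3, 1, 1))
# lengths: 60, 20, 20
```

Reporting mean and standard deviation over "at least 5 models", as the paper does, would amount to repeating such a split (or retraining) with different seeds.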
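The stated hyperparameters (AdaMax, weight decay 1e-5, initial learning rate 1e-4, 200 epochs, batch size 1) translate into a short configuration sketch. The paper does not name its framework, so PyTorch is an assumption here, and the model is a placeholder, not the actual OODML architecture.

```python
# Sketch of the reported training configuration, assuming a PyTorch
# implementation (the paper does not state the framework used).
import torch

model = torch.nn.Linear(1024, 2)  # placeholder; not the real OODML model

# "AdaMax optimizer ... weight decay of 1e-5 ... initial learning rate of 1e-4"
optimizer = torch.optim.Adamax(
    model.parameters(),
    lr=1e-4,
    weight_decay=1e-5,
)

EPOCHS = 200     # "200 epochs"
BATCH_SIZE = 1   # one WSI (bag) per step, as reported
```

Batch size 1 is the common choice in whole-slide MIL pipelines, since each bag already contains thousands of patch features.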