Multimodal Cancer Survival Analysis via Hypergraph Learning with Cross-Modality Rebalance

Authors: Mingcheng Qu, Guang Yang, Donglin Di, Tonghua Su, Yue Gao, Yang Song, Lei Fan

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quantitative and qualitative experiments are conducted on five TCGA datasets, demonstrating that our model outperforms advanced methods by over 3.4% in C-Index performance.
Researcher Affiliation | Academia | 1 Faculty of Computing, Harbin Institute of Technology; 2 School of Software, Tsinghua University; 3 School of Computer Science and Engineering, UNSW Sydney
Pseudocode | No | The paper describes its methods in text and equations but does not contain a clearly labeled pseudocode block or algorithm.
Open Source Code | Yes | Code: https://github.com/MCPathology/MRePath
Open Datasets | Yes | We followed previous studies [Jaume et al., 2024; Zhang et al., 2024] and selected five datasets from The Cancer Genome Atlas (TCGA) to evaluate the performance of our model. The datasets include: Bladder Urothelial Carcinoma (BLCA) (n=384), Breast Invasive Carcinoma (BRCA) (n=968), Colon and Rectum Adenocarcinoma (COREAD) (n=298), Head and Neck Squamous Cell Carcinoma (HNSC) (n=392), and Stomach Adenocarcinoma (STAD) (n=317).
Dataset Splits | Yes | For each cancer type, we conducted 5-fold cross-validation, splitting the data into training and validation sets with a 4:1 ratio.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions a "pretrained encoder model (e.g., ResNet50)" and the "Adam optimizer" but does not specify software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8).
Experiment Setup | Yes | To ensure a fair comparison, we adopted similar settings as previous studies [Chen et al., 2021b; Jaume et al., 2024; Zhang et al., 2024], using identical dataset splits and employing the Adam optimizer with a learning rate of 1 × 10⁻⁴, a weight decay of 1 × 10⁻⁵, and 30 training epochs.
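The C-Index cited in the Research Type row is Harrell's concordance index: the fraction of comparable patient pairs in which the patient with the higher predicted risk also has the shorter observed survival time. A minimal pure-Python sketch of the metric (function name and toy data are illustrative, not taken from the paper):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable pairs where the
    higher-risk patient has the shorter survival time.

    times  - observed times (event or censoring)
    events - 1 if the event was observed, 0 if censored
    risks  - model-predicted risk scores (higher = worse prognosis)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if patient i had an observed
            # event strictly before patient j's recorded time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1      # risk ordering agrees
                elif risks[i] == risks[j]:
                    concordant += 0.5    # ties count half
    return concordant / comparable

# toy example: risks perfectly anti-ordered with survival time
print(concordance_index([2, 4, 6, 8], [1, 1, 1, 0],
                        [0.9, 0.7, 0.4, 0.1]))  # → 1.0
```

A C-Index of 0.5 corresponds to random ranking, so the paper's reported margin of "over 3.4%" is relative to this pairwise-ranking scale.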
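The Dataset Splits row describes 5-fold cross-validation with a 4:1 train/validation ratio per fold. A generic sketch of such a split (an illustration under stated assumptions, not the authors' split code; the function name and seed are hypothetical):

```python
import random

def five_fold_splits(n_samples, seed=42):
    """Yield (train_idx, val_idx) pairs for 5-fold cross-validation:
    each fold holds out 1/5 of the shuffled indices for validation,
    leaving the remaining 4/5 for training (a 4:1 ratio)."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)           # fixed seed -> reproducible folds
    folds = [idx[k::5] for k in range(5)]      # 5 disjoint validation folds
    for k in range(5):
        val = folds[k]
        train = [i for f in range(5) if f != k for i in folds[f]]
        yield train, val

# e.g. the BLCA cohort (n=384) yields five ~307/77 train/validation splits
for train_idx, val_idx in five_fold_splits(384):
    assert len(train_idx) + len(val_idx) == 384
```

Stride-based slicing (`idx[k::5]`) keeps the folds disjoint and near-equal in size even when the cohort size is not divisible by five.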