M2OST: Many-to-one Regression for Predicting Spatial Transcriptomics from Digital Pathology Images

Authors: Hongyi Wang, Xiuju Du, Jing Liu, Shuyi Ouyang, Yen-Wei Chen, Lanfen Lin

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have tested M2OST on three public ST datasets and the experimental results show that M2OST can achieve state-of-the-art performance with fewer parameters and floating-point operations (FLOPs).
Researcher Affiliation | Academia | 1Zhejiang University, Hangzhou, China; 2Zhejiang Lab, Hangzhou, China; 3Ritsumeikan University, Osaka, Japan; EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology and architecture of M2OST in detail with text and figures (Figures 2, 3, 4, 5), but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' section, nor does it present structured, code-like blocks.
Open Source Code | Yes | Code: https://github.com/Dootmaan/M2OST
Open Datasets | Yes | In our experiments, we utilized three public datasets to evaluate the performance of the proposed M2OST model. The first one is the human breast cancer (HBC) dataset (Stenbeck et al. 2021). This dataset contains 30,612 spots in 68 WSIs... The second dataset is the human HER2-positive breast tumor dataset (Andersson et al. 2021). This dataset consists of 36 pathology images and 13,594 spots... The third dataset is the human cutaneous squamous cell carcinoma (cSCC) dataset (Ji et al. 2020), which includes 12 WSIs and 8,671 spots.
Dataset Splits | Yes | In each dataset, 60% of the WSIs and their corresponding ST maps are used for training, 10% for validation, and the remaining 30% for testing.
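The reported split is slide-level (whole WSIs, not individual spots, are assigned to folds). A minimal sketch of such a 60/10/30 split follows; the shuffling strategy and seed are assumptions, since the paper does not state how slides were assigned to folds:

```python
import random

def split_wsis(wsi_ids, seed=0):
    """Split whole-slide images (not spots) into 60% train /
    10% val / 30% test. Seeded shuffle is an assumption; the
    paper only states the split ratios."""
    ids = list(wsi_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = round(0.6 * n)
    n_val = round(0.1 * n)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# e.g. the 68 WSIs of the HBC dataset
train, val, test = split_wsis(range(68))
```

Splitting at the slide level avoids leakage between folds, since spots from the same WSI share tissue context.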
Hardware Specification | Yes | All the methods are trained on two Nvidia RTX A6000 (48 GB) GPUs.
Software Dependencies | No | The paper mentions using the 'Adam (Kingma and Ba 2015) optimizer' but does not specify versions for any software dependencies or libraries.
Experiment Setup | Yes | All the methods are trained with the Adam (Kingma and Ba 2015) optimizer with a learning rate of 1e-4 for 100 epochs. Batch size is 96 for patch-level methods and 1 for slide-level methods. The hyper-parameters of M2OST are the model width, model depth, and the number of heads in self-attention. ... the M2OST Encoder is repeated 4 times (i.e., model depth), the embedding channel is 192 (i.e., model width), and the number of heads for the self-attention operation in ITMM is set to 3.
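The reported setup can be collected into a single configuration sketch. All values below come from the paper; the dictionary layout itself is an illustrative assumption, not the repository's actual config format:

```python
# Hedged sketch of the reported M2OST training configuration.
# Values are taken from the paper; the structure is hypothetical.
m2ost_config = {
    # architecture hyper-parameters
    "model_depth": 4,      # M2OST Encoder repeated 4 times
    "model_width": 192,    # embedding channel dimension
    "num_heads": 3,        # self-attention heads in ITMM
    # optimization
    "optimizer": "Adam",   # Kingma and Ba 2015
    "learning_rate": 1e-4,
    "epochs": 100,
    # batch size depends on method granularity
    "batch_size": {"patch_level": 96, "slide_level": 1},
}

# 192 channels split evenly across 3 heads -> 64-dim heads
assert m2ost_config["model_width"] % m2ost_config["num_heads"] == 0
```

Note that the embedding width (192) is divisible by the head count (3), as multi-head self-attention requires.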