A Methodological Framework for Measuring Spatial Labeling Similarity
Authors: Yihang Du, Jiaying Hu, Suyang Hou, Yueyang Ding, Xiaobo Sun
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through a series of carefully designed experimental cases involving both simulated and real ST data, we demonstrate that SLAM provides a comprehensive and accurate reflection of labeling quality compared to other well-established evaluation metrics. Our code is available at https://github.com/YihDu/SLAM. We design seven experimental cases in which a variety of spatial labeling results of simulated and real ST datasets are evaluated using SLAM and fourteen benchmark metrics. |
| Researcher Affiliation | Academia | 1 School of Statistics and Mathematics, Zhongnan University of Economics and Law; 2 Department of Biomedical Engineering, Southern University of Science and Technology; 3 School of Information Engineering, Zhongnan University of Economics and Law; 4 School of Life Science, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences |
| Pseudocode | No | The paper describes a methodological framework with a workflow comprising four steps (Figure 2) and defines functions M, G, T, and D mathematically. However, it does not present these steps or functions in a structured pseudocode or algorithm block format. |
| Open Source Code | Yes | Our code is available at https://github.com/YihDu/SLAM. |
| Open Datasets | Yes | To increase the reality of our simulated data, we use a human breast cancer 10x Visium dataset (10x-hBC-H) to match real spots to graph nodes and use real spatial gene expression to compute node similarity. We evaluate SLAM's effectiveness in assessing spatial labeling results using a real human breast cancer dataset (slice A1, 10x-hBC-A1) [Andersson et al., 2020]. |
| Dataset Splits | Yes | We simulate a dataset of 36 type A spots. We simulate a dataset of 30 spots, with 15 circles and 15 squares representing tumor and normal spots, respectively. The ground truth labeling comprises an equal number (10) of randomly selected spots from the adipose, breast gland, and cancer regions in the 10x-hBC-H dataset. We evaluate SLAM's effectiveness in assessing spatial labeling results using a real human breast cancer dataset (slice A1, 10x-hBC-A1) [Andersson et al., 2020]. Since most spatial labeling methods for ST are unsupervised, we selected three well-established spatial clustering methods, SpaGCN, GraphST, and STAGATE, to generate spatial labeling results. SLAM evaluates these labeling results by measuring their similarity to the expert-curated ground truth labels. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or specific computing environments used for running its experiments. |
| Software Dependencies | No | To simplify experimental evaluation and result interpretation, we use NetworkX [Hagberg et al., 2008] to simulate the graph structures of spatial labeling results and the ground truth for all simulated cases. While NetworkX is mentioned, a specific version number is not provided in the paper. |
| Experiment Setup | No | The paper mentions SLAM's parameters 'k' (for the k-nearest-neighbor graph), 'h' (the bandwidth of the Gaussian kernel density estimator), and 'gamma' (for the exponential kernel), and notes that a sensitivity analysis of 'h' appears in Appendix E. However, the main text does not provide the concrete numerical values of these hyperparameters used in the primary experimental results. |
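For context on the dependencies and hyperparameters noted above, the kind of NetworkX-based simulation the paper describes could be sketched as follows. This is a minimal illustration only: the function name, the random spot coordinates, and the default values of `k` and `gamma` are assumptions for demonstration, not values or code taken from the SLAM paper.

```python
import networkx as nx
import numpy as np

def simulate_spot_graph(n_spots=36, k=4, gamma=1.0, seed=0):
    """Build a k-nearest-neighbor graph over simulated spot coordinates,
    weighting each edge with an exponential-kernel similarity.

    All parameters and defaults are illustrative, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    coords = rng.random((n_spots, 2))  # simulated 2-D spot positions
    G = nx.Graph()
    G.add_nodes_from(range(n_spots))
    for i in range(n_spots):
        # Euclidean distances from spot i to every spot
        d = np.linalg.norm(coords - coords[i], axis=1)
        # connect i to its k nearest neighbors (index 0 is i itself)
        for j in np.argsort(d)[1 : k + 1]:
            # exponential kernel maps distance to a similarity in (0, 1]
            G.add_edge(i, int(j), weight=float(np.exp(-gamma * d[j])))
    return G

G = simulate_spot_graph()
```

A sketch like this makes the reproducibility gap concrete: without the actual values of `k`, `h`, and `gamma` used in the experiments, any reimplementation must guess these defaults.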