Domain Generalized Medical Landmark Detection via Robust Boundary-Aware Pre-Training

Authors: Haifan Gong, Yu Lu, Xiang Wan, Haofeng Li

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on our new domain generalization benchmark for medical landmark detection demonstrate the superiority of our approach."
Researcher Affiliation | Academia | "1 Shenzhen Research Institute of Big Data, Shenzhen, China; 2 The Chinese University of Hong Kong, Shenzhen, China; 3 University of California, Merced, CA, USA; 4 Lawrence Berkeley National Laboratory, Berkeley, CA, USA"
Pseudocode | No | The paper describes the methodology using textual descriptions and mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code: https://github.com/lhaof/DGMLD"
Open Datasets | Yes | "The Atlas dataset (Gholipour et al. 2017; Wu et al. 2021; Fidon et al. 2022), used for training, includes 40 cases with segmentation masks... Additionally, the FeTA benchmark (Payette et al. 2021, 2023), used for external out-of-domain testing... We used the ISBI-15 (Wang et al. 2016) training data... and the PKU (Zeng et al. 2021) as the out-of-domain test set."
Dataset Splits | Yes | "We build the Domain Generalized Medical Landmark Detection (DGMLD) benchmark, which aims to advance the field of medical imaging by focusing on the detection of anatomical landmarks across various datasets, as detailed in Table 1. The benchmark incorporates multiple datasets tailored for specific medical imaging aspects. The Atlas dataset... used for training, includes 40 cases with segmentation masks... The LFC dataset... comprises 180 annotated MR images, split into 60 validation and 120 testing cases... Additionally, the FeTA benchmark... includes 55 testing cases without segmentation masks. The data splits are also shown in Table 1."
Hardware Specification | Yes | "Our framework was developed using PyTorch version 2.1.2 with Python 3.9.16 and trained on an NVIDIA A100 GPU with 40 GB memory, driver version 525.85.12, and CUDA 12.0. The CPU is an AMD EPYC 7742."
Software Dependencies | Yes | "Our framework was developed using PyTorch version 2.1.2 with Python 3.9.16 and trained on an NVIDIA A100 GPU with 40 GB memory, driver version 525.85.12, and CUDA 12.0."
Experiment Setup | Yes | "We utilized the Adam optimizer for pre-training and fine-tuning phases, with a batch size of 4, a learning rate of 0.001, and 50 training epochs."
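The reported setup and splits can be summarized in a minimal plain-Python sketch. The dictionaries restate the hyperparameters and case counts quoted above; the `steps_per_run` helper is a hypothetical illustration (assuming one case per sample and no dropped last batch, neither of which the paper states), not the authors' code.

```python
import math

# Hyperparameters as reported for both pre-training and fine-tuning.
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "batch_size": 4,
    "learning_rate": 1e-3,
    "epochs": 50,
}

# DGMLD benchmark splits as quoted above (case counts per dataset).
DGMLD_SPLITS = {
    "Atlas": {"train": 40},            # training set, with segmentation masks
    "LFC":   {"val": 60, "test": 120},
    "FeTA":  {"test": 55},             # external out-of-domain test, no masks
}

def steps_per_run(num_cases: int, cfg: dict = TRAIN_CONFIG) -> int:
    """Total optimizer steps in one run, assuming one case per sample
    and that the last partial batch is kept."""
    steps_per_epoch = math.ceil(num_cases / cfg["batch_size"])
    return steps_per_epoch * cfg["epochs"]

# Example: the 40-case Atlas training set.
print(steps_per_run(DGMLD_SPLITS["Atlas"]["train"]))  # 10 steps/epoch * 50 epochs = 500
```

Under these assumptions, one training run over Atlas amounts to 500 optimizer steps, which gives a rough sense of the compute scale behind the reported A100 setup.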