Gradient Alignment Improves Test-Time Adaptation for Medical Image Segmentation

Authors: Ziyang Chen, Yiwen Ye, Yongsheng Pan, Yong Xia

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments establish the effectiveness of the proposed gradient alignment and dynamic learning rate and substantiate the superiority of our GraTa method over other state-of-the-art TTA methods on a benchmark medical image segmentation task. ... We evaluate our proposed GraTa and other state-of-the-art TTA methods on the joint optic disc (OD) and cup (OC) segmentation task, which comprises five public datasets collected from different medical centres ... We utilize the Dice score metric (DSC) for evaluation.
Researcher Affiliation | Academia | 1 National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China; 2 Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen, China; 3 Ningbo Institute of Northwestern Polytechnical University, Ningbo, China. EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: The Algorithm of GraTa.
Open Source Code | Yes | Code: https://github.com/Chen-Ziyang/GraTa
Open Datasets | Yes | We evaluate our proposed GraTa and other state-of-the-art TTA methods on the joint optic disc (OD) and cup (OC) segmentation task, which comprises five public datasets collected from different medical centres, denoted as domain A (RIM-ONE-r3 (Fumero et al. 2011)), B (REFUGE (Orlando et al. 2020)), C (ORIGA (Zhang et al. 2010)), D (REFUGE-Validation/Test (Orlando et al. 2020)), and E (Drishti-GS (Sivaswamy et al. 2014)).
Dataset Splits | No | The paper describes using entire domains as source or target for training and testing, but does not provide specific percentages or counts for training, validation, and test splits within each dataset or domain. For example, it states: "We trained a ResUNet-34 (...) as the baseline individually on each domain (source domain) and subsequently tested it on each remaining domain (target domain)."
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions the use of the Adam optimizer and ResUNet-34 backbone, but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | For a fair comparison, we conducted single-iteration adaptation for each batch of test data using a batch size of 1 across all experiments following (Yang et al. 2022). ... The scaling factor β is set to 0.0001 empirically.
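The pseudocode entry above (Algorithm 1 of GraTa) centers on measuring the agreement between two gradients and scaling the update step accordingly. The following is a minimal NumPy sketch of that idea under stated assumptions: the two losses, their names, and the dynamic-learning-rate rule are illustrative placeholders (only the scaling factor β = 0.0001 and the single-iteration, batch-size-1 setting come from the report), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model" and two surrogate objectives, standing in for the
# pseudo-label loss and the auxiliary loss whose gradients are aligned.
w = rng.normal(size=5)
x = rng.normal(size=5)

def grad_fit(w):
    """Gradient of 0.5 * (w @ x - 1)^2, a stand-in pseudo-label loss."""
    return (w @ x - 1.0) * x

def grad_reg(w):
    """Gradient of 0.5 * ||w||^2, a stand-in auxiliary loss."""
    return w

def cosine(ga, gb):
    """Cosine similarity between two flattened gradient vectors."""
    return float(ga @ gb / (np.linalg.norm(ga) * np.linalg.norm(gb) + 1e-12))

# One adaptation step per test batch (batch size 1, single iteration).
ga, gb = grad_fit(w), grad_reg(w)
cos_sim = cosine(ga, gb)

base_lr, beta = 1e-3, 1e-4               # beta matches the reported 0.0001
lr = base_lr * (1.0 + beta * cos_sim)    # hypothetical dynamic-LR rule
w = w - lr * (ga + gb)
```

When the two gradients agree (cosine similarity near 1) the step is slightly enlarged, and when they conflict it is shrunk; the paper's actual rule may differ in form, but this is the mechanism the dynamic learning rate is built on.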