Volumetric Axial Disentanglement Enabling Advancing in Medical Image Segmentation

Authors: Xingru Huang, Jian Huang, Yihao Guo, Tianyun Zhang, Zhao Huang, Yaqi Wang, Ruipu Tang, Guangliang Cheng, Shaowei Jiang, Zhiwen Zheng, Jin Liu, Renjie Ruan, Xiaoshuai Zhang

IJCAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Validation on various datasets demonstrates PaR's consistent elevation of segmentation precision and boundary clarity across 11 baselines and different imaging modalities, achieving state-of-the-art performance on multiple datasets. Experimental tests demonstrate the ability of volumetric axial disentanglement to refine the segmentation of volumetric medical images.
Researcher Affiliation Academia 1Hangzhou Dianzi University, Hangzhou, China 2Northumbria University, Newcastle, UK 3Communication University of Zhejiang, Hangzhou, China 4Beijing University of Posts and Telecommunications, Beijing, China 5University of Liverpool, Liverpool, UK 6The Third Affiliated Hospital of Wenzhou Medical University, Wenzhou, China 7Ocean University of China, Qingdao, China EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes its methodology in natural language and mathematical formulations, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code is released at https://github.com/IMOP-lab/PaR-Pytorch.
Open Datasets Yes The proposed method is validated using publicly available volumetric segmentation datasets (FLARE2021, OIMHS, and SegTHOR) and compared with 11 previous state-of-the-art models to verify its effectiveness. ... we conducted experiments across three publicly available datasets: FLARE2021 [Ma et al., 2022], OIMHS [Ye et al., 2023], and SegTHOR [Lambert et al., 2020].
Dataset Splits Yes For all datasets, we employ an 8:1:1 random split for the training, validation, and testing sets.
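The 8:1:1 random split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the seed and shuffling scheme are assumptions, since the paper only states the ratio and that the split is random.

```python
import random

def split_811(indices, seed=0):
    """Randomly split sample indices into 8:1:1 train/val/test sets.

    The 8:1:1 ratio comes from the paper; the fixed seed and
    shuffle-then-slice scheme are illustrative assumptions.
    """
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_811(range(100))
```

For a 100-case dataset this yields 80 training, 10 validation, and 10 test cases, with every case assigned to exactly one subset.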
Hardware Specification No The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments within the provided text.
Software Dependencies No The paper mentions using a loss function (L_DiceCE) and an optimizer (AdamW) but does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup Yes Across all training sessions, we utilize L_DiceCE as the loss function and the AdamW optimizer with a learning rate of 0.0001, over 80,000 iterations, and a batch size of 2. Validation employs a sliding window approach with a 0.5 overlap.
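The sliding-window validation with 0.5 overlap can be illustrated by computing the window start offsets along one axis. Only the 0.5 overlap is stated in the paper; the tiling logic below (fixed stride with a final window snapped to the volume edge, in the style of MONAI's sliding-window inference) is an assumption.

```python
def sliding_window_starts(volume_len, window_len, overlap=0.5):
    """Window start offsets along one axis for sliding-window inference.

    The 0.5 overlap matches the paper's validation setting; the
    stride computation and edge handling are illustrative assumptions.
    """
    step = max(1, int(window_len * (1 - overlap)))
    starts = list(range(0, max(volume_len - window_len, 0) + 1, step))
    # Ensure the last window reaches the end of the volume.
    if starts[-1] + window_len < volume_len:
        starts.append(volume_len - window_len)
    return starts

# A 128-voxel axis with a 64-voxel window and 0.5 overlap
# tiles at offsets 0, 32, and 64.
print(sliding_window_starts(128, 64))  # → [0, 32, 64]
```

With 0.5 overlap, each voxel in the interior is covered by two windows along each axis, which is typically combined with averaging of the overlapping predictions.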