Wavelet Multi-scale Region-Enhanced Network for Medical Image Segmentation

Authors: Hang Lu, Liang Du, Peng Zhou

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on several benchmark datasets show that our method outperforms state-of-the-art medical image segmentation methods, demonstrating its effectiveness and superiority. The source code is publicly available at https://github.com/C101812/WMREN/tree/master.
Researcher Affiliation Academia Hang Lu¹, Liang Du², Peng Zhou¹. ¹Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, School of Computer Science and Technology, Anhui University; ²School of Computer and Information Technology, Shanxi University. EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes methods like WRDM, SAFM, and CREM using text and mathematical equations, but it does not include any explicitly labeled pseudocode blocks or algorithms.
Open Source Code Yes The source code is publicly available at https://github.com/C101812/WMREN/tree/master.
Open Datasets Yes We evaluate our method on three benchmark datasets: Synapse¹, ACDC², and ISIC17 [Codella et al., 2018]. ¹https://www.synapse.org#!Synapse:syn3193805/wiki/217789 ²https://www.creatis.insa-lyon.fr/Challenge/acdc/
Dataset Splits No On all datasets, we follow the default protocol for splitting the dataset into training, validation, and testing sets. Images from the Synapse and ACDC datasets are reshaped to 224×224 pixels; in the ISIC17 dataset, images are resized to 256×256.
Hardware Specification Yes All experiments are implemented using PyTorch on a Windows 10 system and an Nvidia GeForce RTX 4090 GPU.
Software Dependencies No All experiments are implemented using PyTorch on a Windows 10 system and an Nvidia GeForce RTX 4090 GPU.
Experiment Setup Yes The model is trained via the SGD optimizer with a momentum of 0.90, a weight decay of 0.0001, and an initial learning rate of 0.05 following a polynomial decay policy. It is trained for 400 epochs with a batch size of 24.
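The learning-rate schedule described above can be sketched in plain Python. The decay exponent is an assumption (the common default of 0.9), since the report only says "polynomial decay policy" without specifying the power:

```python
def poly_lr(epoch, total_epochs=400, base_lr=0.05, power=0.9):
    """Polynomial learning-rate decay for the 400-epoch SGD run.

    lr(e) = base_lr * (1 - e / total_epochs) ** power

    Note: power=0.9 is an assumed value; the report does not state it.
    """
    return base_lr * (1.0 - epoch / total_epochs) ** power

# Training starts at the initial learning rate of 0.05.
print(poly_lr(0))  # 0.05
# The rate decays monotonically toward zero over the 400 epochs.
print(poly_lr(399))
```

In a PyTorch training loop this schedule would typically be applied by updating each parameter group's `lr` at the start of every epoch.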