S³-Mamba: Small-Size-Sensitive Mamba for Lesion Segmentation

Authors: Gui Wang, Yuexiang Li, Wenting Chen, Meidan Ding, Wooi Ping Cheah, Rong Qu, Jianfeng Ren, Linlin Shen

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on three medical image segmentation datasets show the superiority of our S³-Mamba, especially in segmenting small lesions."
Researcher Affiliation | Academia | 1. Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University, Shenzhen, China; 2. School of Computer Science, University of Nottingham Ningbo China, Ningbo, Zhejiang, China; 3. Medical AI Research (MARS) Group, University Engineering Research Center of Digital Medicine and Healthcare, Guangxi Medical University, Nanning, Guangxi, China; 4. Department of Electrical Engineering, City University of Hong Kong, Kowloon, Hong Kong; 5. School of Computer Science, University of Nottingham, Nottingham, United Kingdom
Pseudocode | No | The paper describes methods verbally and uses diagrams (Figure 2) to illustrate the architecture and components, but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | The ISIC2018 dataset (Azad et al. 2019) contains 2,694 dermoscopy images specifically designed for lesion segmentation. The CVC-ClinicDB dataset (Jha et al. 2019), a benchmark for colonoscopy image analysis, includes 612 high-resolution colonoscopy images with corresponding polyp annotations, focusing on polyp detection and segmentation.
Dataset Splits | Yes | "We sort the lesion pixel distributions from smallest to largest in the ISIC2018 and CVC-ClinicDB datasets, dividing them into three groups: the smallest 30% as small lesions, 30%-60% as medium lesions, and 60% and above as large lesions. We randomly select 30% from each group to create three separate sets for model testing, each focusing on small, medium, and large lesions, respectively. The remaining 70% of samples are used for model training." The Lymph dataset predominantly consists of very small and uniformly distributed objects; the authors use 70% for training and 30% for testing without further subdivision.
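Since no code is released, the size-stratified split described above can only be sketched. The following is a minimal illustration in plain Python; the function name, the per-sample lesion pixel counts, and the fixed seed are our assumptions, not the authors' implementation.

```python
import random

def split_by_lesion_size(samples, lesion_pixels, test_frac=0.30, seed=0):
    """Hypothetical sketch of the paper's split procedure: sort samples by
    lesion pixel count, bucket into small (bottom 30%), medium (30%-60%)
    and large (top 40%), then hold out `test_frac` of each bucket as a
    size-specific test set; the rest form one training set."""
    rng = random.Random(seed)  # assumed fixed seed for reproducibility
    order = sorted(range(len(samples)), key=lambda i: lesion_pixels[i])
    n = len(order)
    buckets = {
        "small": order[: int(0.3 * n)],
        "medium": order[int(0.3 * n): int(0.6 * n)],
        "large": order[int(0.6 * n):],
    }
    train, tests = [], {}
    for name, idxs in buckets.items():
        idxs = idxs[:]  # copy so shuffling does not disturb the ordering
        rng.shuffle(idxs)
        cut = int(test_frac * len(idxs))
        tests[name] = [samples[i] for i in idxs[:cut]]
        train += [samples[i] for i in idxs[cut:]]
    return train, tests
```

With this scheme every test set is drawn from a single size bucket, so per-bucket metrics isolate performance on small lesions, which is the paper's focus.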
Hardware Specification | Yes | "Experiments are conducted on a Tesla V100 GPU with 32GB memory."
Software Dependencies | No | The paper mentions "VMamba-S pre-trained on ImageNet-1k" and the "AdamW optimizer" but does not specify version numbers for any software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | The input image size is 256×256 pixels. The backbone is initialized with VMamba-S pre-trained on ImageNet-1k. The AdamW optimizer is used with an initial learning rate of 0.0001 and a cosine scheduler. The batch size is 16, and the maximum number of epochs is 600.
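The reported hyper-parameters can be collected into a small configuration sketch. The dictionary keys and the exact form of the cosine schedule below are our assumptions (the paper states "cosine scheduler" without specifying its variant or a minimum learning rate).

```python
import math

# Hypothetical grouping of the hyper-parameters reported in the paper;
# key names are ours, since no official code or config is released.
CONFIG = {
    "image_size": (256, 256),
    "backbone_init": "VMamba-S pre-trained on ImageNet-1k",
    "optimizer": "AdamW",
    "base_lr": 1e-4,
    "batch_size": 16,
    "max_epochs": 600,
}

def cosine_lr(epoch, base_lr=CONFIG["base_lr"],
              max_epochs=CONFIG["max_epochs"], min_lr=0.0):
    """One common form of cosine annealing: decay from base_lr at epoch 0
    to min_lr at max_epochs (assumed min_lr=0; the paper does not say)."""
    t = min(epoch, max_epochs) / max_epochs
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))
```

Under these assumptions the learning rate starts at 1e-4, halves by epoch 300, and reaches the minimum at epoch 600; PyTorch's `torch.optim.lr_scheduler.CosineAnnealingLR` implements the same curve.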