ParseCaps: An Interpretable Parsing Capsule Network for Medical Image Diagnosis
Authors: Xinyu Geng, Jiaming Wang, Xiaolin Huang, Fanglin Chen, Jun Xu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three medical datasets show that ParseCaps not only outperforms other capsule network variants in classification accuracy and robustness, but also provides interpretable explanations, regardless of the availability of concept labels. |
| Researcher Affiliation | Academia | 1 Harbin Institute of Technology, Shenzhen; 2 Shanghai Jiao Tong University. EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: SAA Routing |
| Open Source Code | Yes | 1 The code is released at https://github.com/ornamentt/Parsecaps. The supplementary material is available at https://arxiv.org/pdf/2411.01564. |
| Open Datasets | Yes | We evaluated ParseCaps with Contrast Enhanced Magnetic Resonance Images (CE-MRI) (Cheng 2017), PH2 (Mendonça et al. 2013) and Derm7pt (D7) (Kawahara et al. 2019) datasets. |
| Dataset Splits | Yes | we split all datasets into 80% training, 10% testing, and 10% validation. |
| Hardware Specification | Yes | ParseCaps was developed in PyTorch 12.1 and Python 3.9, accelerated by eight GTX-3090 GPUs. |
| Software Dependencies | Yes | ParseCaps was developed in PyTorch 12.1 and Python 3.9, accelerated by eight GTX-3090 GPUs. |
| Experiment Setup | Yes | We set the learning rate to 2.5e-3, batch size to 64, and weight decay to 5e-4. The model was trained for 300 epochs using the Adam W optimizer and a 5-cycle linear warm-up. |
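The reported setup (learning rate 2.5e-3, weight decay 5e-4, AdamW, 300 epochs, 5-cycle linear warm-up) can be sketched as a learning-rate schedule. This is a minimal illustration, not the authors' code: the paper does not state what follows the warm-up, so a constant post-warm-up rate is assumed here, and the function name is hypothetical.

```python
def parsecaps_lr(epoch, base_lr=2.5e-3, warmup_epochs=5, total_epochs=300):
    """Learning rate at a given epoch under the reported hyperparameters.

    Linearly ramps from base_lr / warmup_epochs up to base_lr over the first
    `warmup_epochs` epochs, then holds constant (assumption: the paper does
    not specify the post-warm-up schedule).
    """
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    return base_lr


# Example: the warm-up ramp over the first few epochs.
for e in range(6):
    print(e, parsecaps_lr(e))
```

In a PyTorch training loop this factor would typically be applied via `torch.optim.lr_scheduler.LambdaLR` on top of an `AdamW(params, lr=2.5e-3, weight_decay=5e-4)` optimizer with batch size 64.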