TinyMIG: Transferring Generalization from Vision Foundation Models to Single-Domain Medical Imaging
Authors: Chuang Liu, Hongyan Xu, Yichao Cao, Xiu Su, Zhe Qu, Tianfa Li, Shan An, Haogang Zhu
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on large-scale benchmarks demonstrate that TinyMIG, with extremely low computational cost, significantly outperforms state-of-the-art models, showcasing its superior SDG capabilities. Extensive experiments on public datasets demonstrate that TinyMIG significantly outperforms existing methods with minimal model size and computational cost. To validate the effectiveness of our TinyMIG framework, we conduct extensive experiments on two medical image DG benchmark tasks. |
| Researcher Affiliation | Academia | 1State Key Laboratory of Complex & Critical Software Environment, Beihang University, China 2Zhongguancun Laboratory, China 3Big Data Institute, Central South University, China 4Jinan Institute of Supercomputing Technology 5School of Electrical and Information Engineering, Tianjin University, China 6Hangzhou International Innovation Institute, Beihang University, China. Correspondence to: Xiu Su <EMAIL>, Haogang Zhu <EMAIL>. |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations (e.g., Section 3.1 Global Distribution Consistency Learning, Section 3.2 Localized Representation Alignment), but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | All the code and model weights will be publicly available. |
| Open Datasets | Yes | To validate the effectiveness of our TinyMIG framework, we conduct extensive experiments on two medical image DG benchmark tasks. These include two medical segmentation tasks: the 2D joint optic disc (OD) and cup (OC) segmentation task (Almazroa et al., 2018a; Chen et al., 2023), as well as a 3D medical image segmentation task: the prostate MRI segmentation task (Chen et al., 2023). The OD/OC segmentation task comprises five public datasets collected from different medical centres, denoted as D1 (RIM-ONE-r3 (Orlando et al., 2020)), D2 (REFUGE (Almazroa et al., 2018b)), D3 (ORIGA (Zhang et al., 2010)), D4 (REFUGE-Validation/Test (Almazroa et al., 2018b)), and D5 (Drishti-GS (Sivaswamy et al., 2014)). The Prostate segmentation task comprises 116 MRI instances from six different clinical centers, aggregated from three public datasets, including the NCI-ISBI13 (Bloch et al., 2015), I2CVB (Lemaître et al., 2015), and PROMISE12 (Litjens et al., 2014) datasets. |
| Dataset Splits | Yes | In our experiments, we adopt the leave-one-domain-out strategy commonly used in DG research. The model is trained on the single source domain and tested on the remaining K − 1 unseen target domains. |
| Hardware Specification | Yes | All our experiments are conducted on two 4090 GPU computing servers. |
| Software Dependencies | No | We employ the AdamW optimizer (Loshchilov & Hutter, 2018) on all three medical image segmentation tasks, with β = [0.9, 0.999]. The paper mentions using the AdamW optimizer but does not specify versions for programming languages, libraries (e.g., PyTorch, TensorFlow), or other software dependencies. |
| Experiment Setup | Yes | We employ the AdamW optimizer (Loshchilov & Hutter, 2018) on all three medical image segmentation tasks, with β = [0.9, 0.999]. The initial learning rate is set as l₀ = 0.0001. These rates decay according to the polynomial rule l_t = l₀ · (1 − t/T)^0.9, where l_t denotes the learning rate at epoch t, and T represents the total number of epochs, which is set to 200 for prostate segmentation and 100 for the joint segmentation of OD/OC and Polyp segmentation, with the batch size set as 8. Based on the experiments, we set [λ1, λ2, λ3, λ4, λ5] as [1, 1, 0.5, 0.5, 0.5] empirically. In addition, we select the 3-6-9-12 layers in foundation models and the 0-1-2-3 layers in the specialized model, i.e., IR = (3, 6, 9, 12), IS = (0, 1, 2, 3) in Eq. 12. |
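The polynomial learning-rate decay quoted in the experiment setup is straightforward to reproduce. A minimal sketch (the function name and structure are illustrative, not from the paper; only the hyperparameters l₀ = 0.0001, power 0.9, and T = 200/100 are taken from the quoted setup):

```python
def poly_lr(epoch, total_epochs, base_lr=1e-4, power=0.9):
    """Polynomial decay: l_t = l_0 * (1 - t/T)^power, as quoted above."""
    return base_lr * (1.0 - epoch / total_epochs) ** power

# Example for the prostate segmentation schedule (T = 200):
schedule = [poly_lr(t, 200) for t in range(200)]
# Starts at the initial rate and decays toward zero.
```

Note that at t = 0 this returns exactly l₀, and the rate reaches zero only at t = T, matching the quoted rule.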