Adapter Naturally Serves as Decoupler for Cross-Domain Few-Shot Semantic Segmentation

Authors: Jintao Tong, Ran Ma, Yixiong Zou, Guangyao Chen, Yuhua Li, Ruixuan Li

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our method surpasses the state-of-the-art method in CD-FSS significantly by 2.69% and 4.68% mIoU in 1-shot and 5-shot scenarios, respectively." (Section 4, Experiments)
Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Huazhong University of Science and Technology; (2) Peking University. Correspondence to: Yixiong Zou <EMAIL>.
Pseudocode | No | The paper describes the methodology using textual explanations and mathematical formulations (e.g., Equations 1-9) rather than structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement offering access to source code, nor does it provide a link to a code repository.
Open Datasets | Yes | "We employ PASCAL (Shaban et al., 2017), which is an extended version of PASCAL VOC 2012 (Everingham et al., 2010), as our source-domain dataset for training. We regard FSS-1000 (Li et al., 2020), Deepglobe (Demir et al., 2018), ISIC2018 (Codella et al., 2019; Tschandl et al., 2018), and Chest X-ray (Candemir et al., 2013; Jaeger et al., 2013) as target domains for evaluation."
Dataset Splits | Yes | "We employ PASCAL (Shaban et al., 2017) as our source-domain dataset for training... FSS-1000 (Li et al., 2020): We follow the official split for semantic segmentation in our experiment and present results on the designated testing set, consisting of 240 classes and 2,400 images... ISIC2018 (Codella et al., 2019; Tschandl et al., 2018): The dataset is processed and utilized in accordance with the standards set by PATNet."
Hardware Specification | Yes | "We use a single 4090 GPU for training and testing."
Software Dependencies | No | The paper mentions the Adam optimizer and models such as ResNet-50 and ViT, but does not provide version numbers for any software libraries, programming languages, or frameworks.
Experiment Setup | Yes | "The model is trained using the Adam (Min et al., 2021) optimizer with a learning rate of 1e-3. The hyperparameter ρ in SAM is set to 0.5... The fine-tuning of DFN is performed using the Adam optimizer, with learning rates set at 1e-3 for FSS-1000, 5e-1 for Deepglobe, 5e-3 for ISIC and Chest X-ray. Each task undergoes a total of 50 iterations... we set the spatial sizes of both support and query images to 400 × 400."
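The gains reported in the Research Type row are measured in mIoU (mean Intersection-over-Union). The paper does not release its metric code, but the standard definition can be sketched as follows; the function name and the choice to skip classes absent from both masks are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Standard mIoU: per-class IoU averaged over classes that appear
    in either the prediction or the ground truth (others are skipped)."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class c appears in neither mask
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

For binary segmentation tasks such as Chest X-ray, `num_classes=2` reduces this to averaging the foreground and background IoUs.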
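The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. This dict is our own summary of the values reported in the paper; the key names are assumptions, since no code was released:

```python
# Summary of the hyperparameters reported in the paper's setup section.
# Key names are illustrative; the authors released no code.
CONFIG = {
    "optimizer": "Adam",
    "train_lr": 1e-3,           # source-domain training learning rate
    "sam_rho": 0.5,             # SAM neighborhood-size hyperparameter ρ
    "image_size": (400, 400),   # spatial size of support and query images
    "finetune_iterations": 50,  # per-task fine-tuning steps for DFN
    "finetune_lr": {            # target-domain fine-tuning learning rates
        "FSS-1000": 1e-3,
        "Deepglobe": 5e-1,
        "ISIC": 5e-3,
        "Chest X-ray": 5e-3,
    },
}
```

Note the unusually large fine-tuning rate for Deepglobe (5e-1), three orders of magnitude above FSS-1000; a reproduction should treat these per-dataset values as given rather than tuning a single shared rate.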