Concept-Based Unsupervised Domain Adaptation

Authors: Xinyue Xu, Yueying Hu, Hui Tang, Yi Qin, Lu Mi, Hao Wang, Xiaomeng Li

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments demonstrate that our approach significantly outperforms the state-of-the-art CBM and DA methods on real-world datasets." (Section 5, Experiments)
Researcher Affiliation | Academia | 1 The Hong Kong University of Science and Technology; 2 Georgia Institute of Technology; 3 Rutgers University.
Pseudocode | Yes | Algorithm 1: Pseudocode of CUDA Training.
Open Source Code | Yes | "Our code will be available at https://github.com/xmed-lab/CUDA."
Open Datasets | Yes | "We evaluate CUDA across eight real-world datasets." The original Waterbirds dataset (Sagawa et al., 2019)... CUB dataset (Wah et al., 2011)... MNIST (LeCun et al., 1998), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), and USPS (Hull, 1994)... SkinCon (Daneshjou et al., 2022b)... Fitzpatrick 17k (Groh et al., 2021) and Diverse Dermatology Images (DDI) (Daneshjou et al., 2022a). Both datasets are publicly available for scientific, non-commercial use.
Dataset Splits | No | The paper describes how the various source and target domains are constructed (e.g., Waterbirds-shift, CUB training data as source, Waterbirds-shift as target), but it does not provide explicit training, validation, or test splits (e.g., percentages or exact counts) within these domains for reproducing the experimental training process.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions models such as ResNet-50, ResNet-18, CLIP ViT-L/14, and CLIP RN50, but does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "The hyperparameters are summarized in Table 5." Table 5: Hyper-parameters of CUDA during training.

    Setting                 Learning Rate  Weight Decay  λc  λd   Relax Threshold
    Waterbirds-2            1e-3           4e-5          5   0.3  0.5
    Waterbirds-200/CUB      1e-3           4e-5          5   0.3  0.7
    MNIST → MNIST-M/USPS    1e-3           1e-5          5   0.1  0.6
    SVHN → MNIST            1e-3           1e-5          5   0.1  0.7
    I-II → III-IV           1e-3           4e-5          10  0.1  0.3
    III-IV → V-VI/I-II      1e-3           4e-5          10  0.1  0.7
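For reuse in a training script, the Table 5 hyper-parameters can be captured as a small lookup. This is a minimal sketch, not code from the paper: the setting keys and field names (`lr`, `weight_decay`, `lambda_c`, `lambda_d`, `relax_threshold`) are our own labels for the table's columns, with "source->target" naming assumed for the adaptation pairs.

```python
# Hyper-parameters transcribed from Table 5, keyed by experimental setting.
# Field names are illustrative labels for the table's columns, not
# identifiers from the authors' codebase.
CUDA_HPARAMS = {
    "Waterbirds-2":        dict(lr=1e-3, weight_decay=4e-5, lambda_c=5,  lambda_d=0.3, relax_threshold=0.5),
    "Waterbirds-200/CUB":  dict(lr=1e-3, weight_decay=4e-5, lambda_c=5,  lambda_d=0.3, relax_threshold=0.7),
    "MNIST->MNIST-M/USPS": dict(lr=1e-3, weight_decay=1e-5, lambda_c=5,  lambda_d=0.1, relax_threshold=0.6),
    "SVHN->MNIST":         dict(lr=1e-3, weight_decay=1e-5, lambda_c=5,  lambda_d=0.1, relax_threshold=0.7),
    "I-II->III-IV":        dict(lr=1e-3, weight_decay=4e-5, lambda_c=10, lambda_d=0.1, relax_threshold=0.3),
    "III-IV->V-VI/I-II":   dict(lr=1e-3, weight_decay=4e-5, lambda_c=10, lambda_d=0.1, relax_threshold=0.7),
}

def get_hparams(setting: str) -> dict:
    """Return a copy of the tabulated hyper-parameters for one setting."""
    return dict(CUDA_HPARAMS[setting])
```

A copy is returned so callers can tweak a setting (e.g., for a sweep) without mutating the shared table.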