CASUAL: Conditional Support Alignment for Domain Adaptation with Label Shift

Authors: Anh T Nguyen, Lam Tran, Anh Tong, Tuan-Duy H. Nguyen, Toan Tran

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical results demonstrate that CASUAL outperforms other state-of-the-art methods on different UDA benchmark tasks under different label shift conditions. We provide experimental results on several benchmark tasks in UDA, which consistently demonstrate our proposed method's empirical benefits compared to other existing UDA approaches.
Researcher Affiliation | Collaboration | ¹University of Illinois Chicago, ²VinAI Research, ³Korea University, ⁴National University of Singapore
Pseudocode | Yes | Algorithm 1: Conditional Adversarial Support Alignment
Open Source Code | No | The paper does not contain an explicit statement about the release of source code, nor does it provide a link to a code repository. It mentions "Further implementation details, including the hyperparameter values and network architectures, are provided in the Appendix," but this does not confirm code availability.
Open Datasets | Yes | We focus on visual domain adaptation tasks and empirically evaluate our proposed algorithm CASUAL on the benchmark UDA datasets USPS↔MNIST, STL↔CIFAR, and VisDA-2017. We further conduct experiments on the DomainNet dataset and provide the results in the Appendix due to the page limitation.
Dataset Splits | Yes | For VisDA-2017 and DomainNet, instead of using 100% of the unlabeled target data for both training and evaluation (Prabhu et al. 2021; Tanwisuth et al. 2021), we utilize 85% of the unlabeled target data for training and the remaining 15% for evaluation... For each method and label shift degree, we perform 5 runs with different random seeds and report the average per-class accuracy on the target domain's test set...
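The evaluation protocol quoted above (an 85/15 split of the unlabeled target data, plus average per-class accuracy as the metric) can be sketched as follows. This is a hedged illustration, not the paper's code: the function names `split_unlabeled_target` and `per_class_accuracy` are hypothetical, and the paper's exact shuffling and seeding scheme is not specified.

```python
import random
from collections import defaultdict

def split_unlabeled_target(indices, train_frac=0.85, seed=0):
    """Shuffle target-domain sample indices and hold out the last
    (1 - train_frac) fraction for evaluation, mirroring the 85/15
    protocol described in the review (shuffle/seed details assumed)."""
    rng = random.Random(seed)
    idx = list(indices)
    rng.shuffle(idx)
    cut = int(len(idx) * train_frac)
    return idx[:cut], idx[cut:]

def per_class_accuracy(y_true, y_pred):
    """Mean of per-class accuracies. Unlike overall accuracy, each class
    contributes equally regardless of its frequency, which makes the
    metric informative under label shift."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)
```

For example, a classifier that predicts class 0 on a set with three class-0 samples and one class-1 sample scores 100% on class 0 and 0% on class 1, giving a per-class accuracy of 0.5 even though overall accuracy is 0.75.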
Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments. It mentions "Further implementation details, including the hyperparameter values and network architectures, are provided in the Appendix," but this does not include hardware specifics.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. It mentions "Further implementation details, including the hyperparameter values and network architectures, are provided in the Appendix," but this does not include software versions.
Experiment Setup | Yes | The training process of our proposed algorithm, CASUAL, can be formulated as an alternating optimization problem (see Algorithm 1): min_{f,c} L_y(g) + λ_align L_align(f) + λ_ce L_ce(g) + λ_v L_v(g); min_φ L_dis(φ), where λ_align, λ_ce, λ_v are the weight hyperparameters associated with the respective loss terms. Further implementation details, including the hyperparameter values and network architectures, are provided in the Appendix.
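The alternating scheme quoted above (a minimization step on the feature extractor/classifier objective, then a separate step on the discriminator objective) can be sketched with a toy example. Everything below is a hypothetical stand-in: the quadratic losses, the parameters `theta` and `phi`, and the weight values are illustrative only and are not the paper's actual losses or hyperparameters.

```python
def alternating_minimization(steps=200, lr=0.1,
                             lam_align=0.5, lam_ce=1.0, lam_v=0.1):
    """Toy sketch of the alternating optimization pattern:
    one gradient step on L_y + λ_align L_align + λ_ce L_ce + λ_v L_v
    w.r.t. the model parameters, then one step on L_dis w.r.t. the
    discriminator parameters. Each loss is modeled as a simple
    quadratic (hypothetical stand-ins, for illustration only)."""
    theta = 5.0   # stand-in for the (f, c) parameters
    phi = -3.0    # stand-in for the discriminator parameters
    for _ in range(steps):
        # gradient of the weighted sum of quadratic stand-in losses:
        # L_y = (theta - 1)^2, L_align = (theta - phi)^2,
        # L_ce = theta^2, L_v = theta^2
        grad_theta = (2 * (theta - 1.0)
                      + lam_align * 2 * (theta - phi)
                      + lam_ce * 2 * theta
                      + lam_v * 2 * theta)
        theta -= lr * grad_theta
        # discriminator step: L_dis = (phi - theta)^2
        grad_phi = 2 * (phi - theta)
        phi -= lr * grad_phi
    return theta, phi
```

With these stand-in losses the two parameters settle at a joint fixed point (phi tracks theta, and theta balances the weighted quadratic terms), which is the qualitative behavior the alternating formulation aims for; in the actual method the two objectives would be optimized by stochastic gradient steps on neural-network parameters.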