Collaborative Semantic Consistency Alignment for Blended-Target Domain Adaptation

Authors: Yuwu Lu, Xue Hu, Waikeung Wong, Haoyu Huang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on several datasets show that CSCA achieves promising classification performance."
Researcher Affiliation | Academia | 1 South China Normal University, Guangzhou, China; 2 Hong Kong Polytechnic University, Hong Kong, China
Pseudocode | Yes | "Algorithm 1 in the Supp. Mat. summarizes the training process of CSCA."
Open Source Code | Yes | Code: https://github.com/xuehu365/CSCA
Open Datasets | Yes | "We conduct experiments on four standard DA benchmarks: Office-31 (Saenko et al. 2010), Office-Home (Venkateswara et al. 2017), ImageCLEF-DA (Caputo et al. 2014), and the very large-scale DomainNet (Peng et al. 2019) (0.6 million images)."
Dataset Splits | No | The paper describes how domains are split to form source and blended-target tasks (e.g., "A → W/D"), but does not provide specific training/validation/test splits within these domains or for the overall datasets.
Hardware Specification | No | The paper mentions using ResNet-50 as a backbone but does not specify any hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments in the main text.
Software Dependencies | No | The paper mentions using ResNet-50 and various data augmentation techniques, but it does not specify software versions for any libraries, frameworks, or programming languages used (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "We set the number of projections M = 256, as done in (Lee et al. 2019)." To reduce the impact of unreliable pseudo-labels, the elements of A corresponding to low-confidence pseudo-labels (those whose maximum predicted probability falls below the threshold τ1) are not optimized during training. τ2 is a scaling temperature. λe and λv are balance parameters; λp and λf are weighting parameters. The total loss is Ltotal = Lcls + λswd·Lswd + Lgraph + Lcon, where λswd is a trade-off parameter. Fig. 3 (a) shows that accuracy is optimal with balance parameters λe = 1.0 and λv = 0.1. Fig. 3 (b) indicates that performance peaks with λp = 0.2 and λf = 1.0. For the loss trade-off parameter λswd, setting λswd = 1.0 yields the best results, as shown in Fig. 3 (c).
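The row above quotes two mechanisms from the paper's setup: masking low-confidence pseudo-labels against a threshold τ1, and summing the component losses with the trade-off λswd. A minimal NumPy sketch of both, assuming illustrative values and function names (the paper's excerpt does not state τ1's value, and this is not the authors' code):

```python
import numpy as np

# Illustrative constants (assumptions, not from the paper's excerpt):
TAU1 = 0.8        # confidence threshold tau_1 for pseudo-label filtering
LAMBDA_SWD = 1.0  # loss trade-off lambda_swd (best value per Fig. 3 (c))

def confidence_mask(probs: np.ndarray, tau1: float = TAU1) -> np.ndarray:
    """Boolean mask over samples: True where the maximum predicted
    probability meets the threshold, i.e. the pseudo-label is kept;
    low-confidence entries are excluded from optimization."""
    return probs.max(axis=1) >= tau1

def total_loss(l_cls: float, l_swd: float, l_graph: float, l_con: float,
               lambda_swd: float = LAMBDA_SWD) -> float:
    """L_total = L_cls + lambda_swd * L_swd + L_graph + L_con."""
    return l_cls + lambda_swd * l_swd + l_graph + l_con

# Example: two samples' class probabilities; only the first is confident
# enough to keep under tau_1 = 0.8.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
keep = confidence_mask(probs)
```

In a training loop, `keep` would zero out (or drop) the pseudo-label loss terms for unreliable samples before the weighted sum is backpropagated.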