Invertible Projection and Conditional Alignment for Multi-Source Blended-Target Domain Adaptation

Authors: Yuwu Lu, Haoyu Huang, Waikeung Wong, Xue Hu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiment results on the ImageCLEF-DA, Office-Home, and DomainNet datasets validate the effectiveness of our method.
Researcher Affiliation | Academia | Yuwu Lu1,2, Haoyu Huang1, Waikeung Wong2*, Xue Hu1. 1South China Normal University, Guangzhou, China; 2Hong Kong Polytechnic University, Hong Kong, China. EMAIL, EMAIL
Pseudocode | No | The paper describes the method using mathematical equations and prose, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/hyhuang99/IPCA
Open Datasets | Yes | We evaluated and compared the state-of-the-art (SOTA) methods with our method on three popular DA datasets (i.e., DomainNet, Office-Home, and ImageCLEF-DA). DomainNet (Peng et al. 2019) is the largest dataset in DA, containing 0.6 million images from 345 categories in 6 domains: Clipart (c), Infograph (i), Painting (p), Quickdraw (q), Real-world (r), and Sketch (s). ... Office-Home (Venkateswara et al. 2017) is a challenging dataset with label imbalance, which contains 15,500 images in total. ... ImageCLEF-DA (Caputo et al. 2014) contains a total of 2,400 images, covering 12 common categories in 4 domains: Bing (B), Caltech (C), ImageNet (I), and Pascal (P).
Dataset Splits | No | To highlight the challenge in the MBDA setting, we can no longer use the standard protocols from the above three datasets. Thus, for the SSDA setting, one column denotes one SSDA task, e.g., r→c in Table 1a. For the MSDA setting, two domains are selected as sources and one domain as target, e.g., r+s→c and r+s→p in Table 1a. For the MTDA/BTDA setting, one domain is selected as source and two domains as targets, e.g., r→c+p and s→c+p in Table 1a. For the MMDA/MBDA setting, two domains are sources, and the other domains are targets, e.g., r+s→c+p in Table 1a.
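The task-naming convention in the protocol above (sources joined by "+", an arrow, then blended targets joined by "+") can be sketched programmatically. This is an illustrative helper, not code from the paper's repository; the choice of two sources and two targets follows the paper's r+s→c+p example for the MBDA setting.

```python
from itertools import combinations

def make_task(sources, targets):
    """Format a task label like 'r+s->c+p' (sources -> blended targets)."""
    return "+".join(sources) + "->" + "+".join(targets)

# The six DomainNet domains named in the dataset description.
domains = ["c", "i", "p", "q", "r", "s"]

# MBDA/MMDA-style tasks: two source domains, two (disjoint) target domains.
tasks = [
    make_task(src, tgt)
    for src in combinations(domains, 2)
    for tgt in combinations([d for d in domains if d not in src], 2)
]

print(make_task(["r", "s"], ["c", "p"]))  # the r+s->c+p task from Table 1a
print(len(tasks))  # number of such source/target combinations
```

Enumerating tasks this way makes explicit why the standard single-source, single-target protocols cannot be reused: each MBDA column in the tables corresponds to one of these multi-domain combinations.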
Hardware Specification | Yes | All experiments on the three datasets utilize the same backbone network, ResNet-50 (He et al. 2016), and run on an Nvidia GeForce RTX 4090 GPU.
Software Dependencies | Yes | We implemented and evaluated our method on the PyTorch (Paszke et al. 2019) platform; the version of PyTorch is 1.13.1. ... The version of CUDA is 11.7.
Experiment Setup | Yes | The number of INN blocks contained in the IPM is K = 5. ... The batch size in all training experiments is set to 32. The optimizer is Stochastic Gradient Descent (SGD) with a momentum of 0.9 and a weight decay of 1e-3. In addition, the learning rate is set to 1e-3 and updated by LambdaLR (Paszke et al. 2019) during training.
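The quoted hyperparameters can be collected into a minimal sketch. The paper only names LambdaLR as the scheduler, so the (1 + alpha*p)^(-beta) decay below is an assumption (it is the multiplicative factor commonly paired with LambdaLR in domain adaptation work); everything else is taken from the setup excerpt.

```python
# Hyperparameters quoted from the paper's experiment setup.
CONFIG = {
    "backbone": "ResNet-50",
    "inn_blocks": 5,        # K = 5 INN blocks in the IPM
    "batch_size": 32,
    "optimizer": "SGD",
    "momentum": 0.9,
    "weight_decay": 1e-3,
    "base_lr": 1e-3,
}

def lr_lambda(step, total_steps, alpha=10.0, beta=0.75):
    """Multiplicative lr factor for torch.optim.lr_scheduler.LambdaLR.

    The exact schedule is not given in the report; alpha and beta here
    are assumed values, shown only to illustrate how LambdaLR is wired.
    """
    p = step / total_steps
    return (1.0 + alpha * p) ** (-beta)

# Effective learning rate at the start and end of training:
print(CONFIG["base_lr"] * lr_lambda(0, 100))    # factor is 1.0 at step 0
print(CONFIG["base_lr"] * lr_lambda(100, 100))  # decayed toward the end
```

With PyTorch available, the same pieces would plug in as `optim.SGD(params, lr=CONFIG["base_lr"], momentum=0.9, weight_decay=1e-3)` wrapped by `LambdaLR(optimizer, lr_lambda=...)`.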