Toward Improving Robustness and Accuracy in Unsupervised Domain Adaptation

Authors: Aishwarya Soni, Tanima Dutta

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted extensive experiments on benchmark datasets, including Office-Home, PACS, and VisDA, demonstrating significant improvements in both robustness and accuracy. Our method achieves average accuracy improvements of 6% and 8.1%, and average robustness improvements of 10.2% and 4.9%, compared to state-of-the-art methods on the PACS and VisDA datasets, respectively.
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Indian Institute of Technology (BHU) Varanasi, India
Pseudocode | Yes | The algorithm of our CAM+SPLR is summarized in Algorithm 1 of the supplementary material.
Open Source Code | No | The paper does not provide a direct link to source code, nor does it explicitly state that the code for the described methodology is being released or available in supplementary materials. It mentions using the 'Transfer-Learning-library (TLL)', but this refers to a third-party tool, not the authors' own implementation.
Open Datasets | Yes | We evaluate our method on three multi-domain datasets: Office-Home (Wang et al. 2021), which has four domains across 65 categories, i.e., Art (Ar, 2427 images), Clipart (Cl, 4365 images), Product (Pr, 4439 images), and Real World (Re, 4357 images); PACS (Li et al. 2017), which has four domains with seven categories, namely Photo (Ph, 1670 images), Art Painting (Ar, 2048 images), Cartoon (Ca, 2344 images), and Sketch (Sk, 3929 images); and VisDA (Peng et al. 2017), a large dataset with two domains across 12 categories, namely Synthetic images (Syn, 152,409 images) and Real images (Re, 55,400 images).
Dataset Splits | No | The paper mentions using 'source and target data', evaluates 'on the target data', and refers to different domain pairs (e.g., 'Synthetic→Real', 'Clipart→Real World', 'Photo→Art') and 'standard accuracy and robustness computed on 20-step PGD'. However, it does not explicitly provide train/validation/test split percentages, sample counts for each split, or a methodology for how these splits were performed beyond domain separation.
Hardware Specification | No | The paper mentions using 'ResNet-50 (He et al. 2016) (pre-trained on ImageNet) as the backbone feature extractor' for the experiments, but it does not specify any hardware details such as GPU models, CPU types, or memory used to run these experiments.
Software Dependencies | No | During pre-training of DANN, we use the Transfer-Learning-library (TLL) (Junguang Jiang 2020) to set up the experimental environment for UDA and follow the training hyperparameters as in (Wang et al. 2024). The training batch size is set to 32 with the Adam optimizer and a learning rate of 0.001. While the 'Transfer-Learning-library' and 'Adam optimizer' are mentioned, specific version numbers for these or other key software components are not provided.
Experiment Setup | Yes | The training batch size is set to 32 with the Adam optimizer and a learning rate of 0.001. We train CAM+SPLR for 25K iterations using two stochastic gradient descent steps in every training epoch. We consider adversarial perturbations under the L∞ norm, with adversarial examples generated using 10 steps of PGD with ϵ = 2/255 and ϵ = 8/255. Finally, we evaluate all methods on the target data using standard accuracy and robustness computed under a 20-step PGD (PGD-20) attack with ϵ = 2/255 on all datasets except VisDA and PACS, where ϵ = 8/255.
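For context on the setup above: the paper's robustness numbers come from L∞-bounded PGD attacks (10 steps for training, 20 for evaluation). The sketch below is not the authors' code; it is a minimal, self-contained NumPy illustration of an L∞ PGD attack on a toy differentiable loss, with hypothetical names (`pgd_attack_linf`, `grad_fn`) and a common step-size heuristic assumed, since the paper does not report its step size.

```python
import numpy as np

def pgd_attack_linf(x, y, grad_fn, eps=8/255, steps=10, alpha=None):
    """L-infinity PGD: repeated gradient-sign ascent on the loss,
    projecting the perturbation back into the eps-ball each step.

    x: clean input in [0, 1]; y: label; grad_fn(x_adv, y) -> dLoss/dx.
    """
    if alpha is None:
        alpha = 2.5 * eps / steps  # assumed step-size heuristic
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in valid pixel range
    return x_adv

# Toy target: logistic loss of a fixed linear model, label y in {-1, +1}.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x_clean = rng.uniform(0.3, 0.7, size=8)

def loss(x, y):
    return -np.log(1.0 / (1.0 + np.exp(-y * (w @ x))))

def grad_fn(x_adv, y):
    s = 1.0 / (1.0 + np.exp(-y * (w @ x_adv)))
    return -y * (1.0 - s) * w  # analytic d(loss)/dx

x_adv = pgd_attack_linf(x_clean, +1, grad_fn, eps=8/255, steps=10)
```

The projection step (the first `np.clip`) is what distinguishes PGD from plain gradient ascent: the adversarial example is guaranteed to stay within ϵ of the clean input in every coordinate, matching the ϵ = 2/255 and 8/255 budgets quoted above.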