Secure Domain Adaptation with Multiple Sources

Authors: Serban Stan, Mohammad Rostami

TMLR 2022

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We provide theoretical analysis to support our approach and conduct empirical experiments to demonstrate that our algorithm is effective. We conduct extensive empirical experiments on five standard MUDA benchmark datasets to demonstrate the effectiveness of our approach.
Researcher Affiliation | Academia | Serban Stan EMAIL Department of Computer Science University of Southern California; Mohammad Rostami EMAIL Department of Computer Science University of Southern California
Pseudocode | Yes | Algorithm 1 Secure Multi-source Unsupervised Domain Adaptation (SMUDA)
Open Source Code | Yes | Our code is provided as part of the supplementary material and available online at https://github.com/serbanstan/secure-muda.
Open Datasets | Yes | Datasets: We validate on five datasets: Office-31 (Saenko et al. (2010)), Office-Home (Venkateswara et al. (2017)), Office-Caltech (Gong et al. (2012)), Image-CLEF (Long et al. (2017a)) and DomainNet (Peng et al. (2019a)).
Dataset Splits | No | The paper mentions using specific datasets (Office-31, Office-Home, Office-Caltech, Image-CLEF, DomainNet) and the concept of source and target domains, but it does not explicitly state the training, validation, and test splits for these datasets within the main text.
Hardware Specification | Yes | As hardware we used an NVIDIA Titan Xp GPU.
Software Dependencies | No | The paper mentions using a ResNet50 network as a backbone but does not specify any software dependencies (e.g., Python, PyTorch/TensorFlow) or their version numbers.
Experiment Setup | Yes | We use the Adam optimizer with a source learning rate of 1e-5 for each source domain for all datasets. Target learning rates are chosen between 1e-5 and 1e-7 for adaptation. The number of training iterations and adaptation iterations differs per dataset: Office-31 (12k, 48k), DomainNet (80k, 160k), Image-CLEF (4k, 3k), Office-Home (40k, 10k), Office-Caltech (4k, 6k). The training batch size is either 16 or 32, with little difference observed between the two. The adaptation batch size is usually chosen around 10× the number of classes for each dataset, to ensure a good class representation when minimizing the SWD distance. The network size is the same across all datasets, with the SWD minimization space being 256-dimensional. The ResNet layers of the feature extractor are frozen during adaptation.
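The setup row above says adaptation minimizes the sliced Wasserstein distance (SWD) between source and target embeddings in a 256-dimensional space. A minimal NumPy sketch of a Monte-Carlo SWD estimate is shown below; the function name, the projection count, and the assumption of equal-size feature batches are illustrative choices, not details taken from the paper:

```python
import numpy as np

def sliced_wasserstein_distance(source_feats, target_feats,
                                n_projections=128, seed=0):
    """Monte-Carlo estimate of the (squared) sliced Wasserstein
    distance between two equal-size batches of embeddings.
    Hypothetical helper, not the authors' implementation."""
    rng = np.random.default_rng(seed)
    dim = source_feats.shape[1]
    # Sample random unit directions on the hypersphere.
    directions = rng.standard_normal((n_projections, dim))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Project both batches onto each direction (1-D slices).
    src_proj = source_feats @ directions.T   # (n, n_projections)
    tgt_proj = target_feats @ directions.T   # (n, n_projections)
    # In 1-D, the optimal Wasserstein-2 coupling matches sorted samples.
    src_sorted = np.sort(src_proj, axis=0)
    tgt_sorted = np.sort(tgt_proj, axis=0)
    return np.mean((src_sorted - tgt_sorted) ** 2)
```

The batch-size heuristic in the setup (about 10× the number of classes) matters here because each 1-D slice is estimated from the batch alone, so too few samples per class would make the sorted-projection matching noisy.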