Bidirectional View based Consistency Regularization for Semi-Supervised Domain Adaptation

Authors: Yuntao Du, Juan Jiang, Hongtao Luo, Haiyang Yang, Mingcai Chen, Chongjun Wang

TMLR 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments are conducted, and the results show the effectiveness of the proposed method. We conduct extensive experiments on three common datasets.
Researcher Affiliation Academia Yuntao Du EMAIL National Key Laboratory for Novel Software Technology Department of Computer Science and Technology, Nanjing University Juan Jiang EMAIL National Key Laboratory for Novel Software Technology Department of Computer Science and Technology, Nanjing University Hongtao Luo EMAIL National Key Laboratory for Novel Software Technology Department of Computer Science and Technology, Nanjing University Haiyang Yang EMAIL National Key Laboratory for Novel Software Technology Department of Computer Science and Technology, Nanjing University Mingcai Chen EMAIL National Key Laboratory for Novel Software Technology Department of Computer Science and Technology, Nanjing University Chongjun Wang EMAIL National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University
Pseudocode Yes the pseudo code of BVCR is shown in appendix. Algorithm 1 Bidirectional View based Consistency Regularization for SSDA (BVCR)
Open Source Code No Our method is implemented in PyTorch Paszke et al. (2019), with an open-source library [4] Zhou et al. (2021b). The results of MCT [5] are obtained from our own implementation using its open-source code, and the results of the baselines are taken from their respective publications when the evaluation protocol is the same. The paper does not provide a direct link or explicit statement for *their own* code implementation of BVCR.
Open Datasets Yes Datasets: We evaluate the proposed method on several recent SSDA benchmarks, including Office31 [1] Saenko et al. (2010), Office-Home [2] Venkateswara et al. (2017), and DomainNet [3] Peng et al. (2019). [1] https://www.cc.gatech.edu/~judy/domainadapt/, [2] https://www.hemanthdv.org/officeHomeDataset.html, [3] http://ai.bu.edu/M3SDA/
Dataset Splits Yes For a fair comparison, part of the target samples is split off as a validation set; these samples are not used for training and serve only to select the models and hyper-parameters. ... The specific numbers of train, validation, and test samples used in the three datasets are presented in Table 8 in the appendix. Table 8: The number of train, validation, and test samples when each domain is selected as the target domain under the 3-shot setting.
Hardware Specification Yes The experiments are conducted on a Linux operating system with a Tesla V100 (32G memory) and a GeForce RTX 3090 (24G memory).
Software Dependencies No Our method is implemented in PyTorch Paszke et al. (2019), with an open-source library [4] Zhou et al. (2021b). While the paper mentions PyTorch, it does not provide a specific version number.
Experiment Setup Yes The learning rate, hyper-parameter λ, and the initial value of the confidence threshold γ are searched on the validation set from {1e-5, 5e-5, 1e-4, 5e-4, 5e-3}, {0.5, 1.0, 2.0}, and {0.5, 0.8, 0.85, 0.9, 0.95}, respectively. The confidence threshold γ is 0.9, and we set λ to 1 for all datasets. The number of epochs and the random seed are set to 100 and 123, respectively, on all three datasets. The batch size B is set to 32, 16, and 16 for Office-Home, Office31, and DomainNet, respectively. We use SGD with momentum for optimization and set the momentum to 0.95. The learning rate is set to 5e-4 and adjusted with a cosine annealing strategy.
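For context, the cosine annealing schedule mentioned above (in PyTorch, typically `torch.optim.lr_scheduler.CosineAnnealingLR`) can be sketched in plain Python. This is a minimal illustration using the paper's reported base learning rate (5e-4) and epoch count (100); the minimum learning rate of 0 is an assumption, as the paper does not state it.

```python
import math

def cosine_annealing_lr(epoch, total_epochs=100, base_lr=5e-4, min_lr=0.0):
    """Cosine annealing: decay base_lr toward min_lr over total_epochs.

    min_lr=0.0 is an assumed default; the paper only reports the
    initial learning rate (5e-4) and the number of epochs (100).
    """
    return min_lr + 0.5 * (base_lr - min_lr) * (
        1 + math.cos(math.pi * epoch / total_epochs)
    )

# The schedule starts at the reported learning rate and decays
# smoothly, reaching roughly half of it at the midpoint and ~0
# at the final epoch.
start = cosine_annealing_lr(0)    # ~5e-4
mid = cosine_annealing_lr(50)     # ~2.5e-4
end = cosine_annealing_lr(100)    # ~0
```

PyTorch's built-in scheduler implements the same formula per optimizer step (or per epoch, depending on where `scheduler.step()` is called), so this sketch is mainly useful for checking what learning rate a given epoch would use.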