Dynamic Target Distribution Estimation for Source-Free Open-Set Domain Adaptation

Authors: Zhiqi Yu, Zhichao Liao, Jingjing Li, Zhi Chen, Lei Zhu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. The superiority of our approach is validated across multiple benchmarks. Remarkably, DTDE outperforms the best competitor by 7.6% on the VisDA dataset.
Researcher Affiliation: Academia. 1 University of Electronic Science and Technology of China (UESTC); 2 School of Electrical Engineering and Computer Science, University of Queensland; 3 School of Electronic and Information Engineering, Tongji University.
Pseudocode: No. The paper describes its methods and equations but does not present them in a clearly labeled pseudocode or algorithm block.
Open Source Code: No. The paper provides no link to its own source code and does not state that the code is released or included in supplementary materials. It only refers to code from other papers: "The results of SHOT-O (Liang, Hu, and Feng 2020), GLC (Qu et al. 2023), and LEAD (based on GLC) (Qu et al. 2024) is from their released code with a batch size of 32 for fair comparison."
Open Datasets: Yes. Datasets. We conducted a comprehensive evaluation of DTDE on the following DA benchmarks. (1) Office-31 (Saenko et al. 2010) is a popular small-scale dataset which includes 4,652 images across 31 categories of office items. (2) Office-Home (Venkateswara et al. 2017) is a widely recognized medium-scale benchmark, including 65 categories (15,500 images) across four domains: Artistic images (Ar), Clip-Art images (Cl), Product images (Pr), and Real-World images (Rw). (3) VisDA (Peng et al. 2017) is a challenging large-scale dataset. The source domain contains 152,397 synthetic images, while the target domain consists of 55,388 real-world images.
Dataset Splits: No. The paper states: "The class split is the same with previous works (Saito and Saenko 2021; Qu et al. 2023). For brevity, the class split is illustrated in the form of (Y, Ys, Yt), where Y, Ys, and Yt denote the number of shared categories, source domain private categories, and target domain unknown categories, respectively." This describes the category definitions for the open-set problem but does not provide specific training, validation, or test splits for the samples themselves (e.g., percentages or exact counts).
Hardware Specification: Yes. All experiments are conducted on an RTX-3090 GPU with PyTorch 2.1.0.
Software Dependencies: Yes. All experiments are conducted on an RTX-3090 GPU with PyTorch 2.1.0.
Experiment Setup: Yes. The batch size is set to 32 in OSDA and 64 in UniDA to be consistent with previous works. The network is optimized using the Stochastic Gradient Descent (SGD) optimizer (Bottou 2010) with a momentum of 0.9. The learning rate is set to 1e-3 for Office-31 and Office-Home, and 1e-4 for VisDA. Deep Embedded Validation (You et al. 2019b) is conducted for hyper-parameter selection. For constant hyper-parameters, we set ϵ to 0.875 and t to 10 in all experiments. The only hyper-parameter that needs to be adjusted in application is α, which is set to 0.3 for Office-31 and VisDA and 1.5 for Office-Home. We empirically set γ to 0.55 in all experiments, which is consistent with previous literature (Qu et al. 2023, 2024).
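The reported hyper-parameters can be collected into a single configuration helper. This is a minimal sketch assembled from the values quoted above; the function name `build_config` and the dictionary layout are illustrative assumptions, not the authors' actual code.

```python
# Constants reported for all experiments: epsilon = 0.875, t = 10,
# gamma = 0.55, and SGD momentum = 0.9.
CONSTANTS = {"epsilon": 0.875, "t": 10, "gamma": 0.55, "momentum": 0.9}

def build_config(dataset: str, setting: str = "OSDA") -> dict:
    """Return the per-dataset training configuration described in the paper.

    dataset: one of "Office-31", "Office-Home", "VisDA".
    setting: "OSDA" (batch size 32) or "UniDA" (batch size 64).
    """
    lr = 1e-4 if dataset == "VisDA" else 1e-3        # 1e-3 for Office-31/Office-Home
    alpha = 1.5 if dataset == "Office-Home" else 0.3  # the only tuned hyper-parameter
    batch_size = 32 if setting == "OSDA" else 64
    return {"lr": lr, "alpha": alpha, "batch_size": batch_size, **CONSTANTS}
```

For example, `build_config("VisDA")` yields the VisDA setting (lr 1e-4, α 0.3, batch size 32), while `build_config("Office-Home", setting="UniDA")` switches to lr 1e-3, α 1.5, and batch size 64.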