Revisiting Source-Free Domain Adaptation: a New Perspective via Uncertainty Control
Authors: Gezheng Xu, Hui Guo, Li Yi, Charles Ling, Boyu Wang, Grace Yi
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets and empirical analyses confirm our theoretical findings and the effectiveness of the proposed method. This work offers new insights into understanding and advancing SFDA performance. We release our code at https://github.com/xugezheng/UCon_SFDA. Section 5 is explicitly titled "EXPERIMENTS" and includes subsections like "EXPERIMENTAL SETUP", "OVERALL EXPERIMENTAL RESULTS", and "Ablation Study", containing tables of performance metrics and figures demonstrating experimental findings. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science, University of Western Ontario, 2Vector Institute, 3Department of Statistical and Actuarial Sciences, University of Western Ontario (Academic affiliations); 4TikTok (Industry affiliation). Emails: EMAIL (Academic); EMAIL (Industry); EMAIL (Academic). |
| Pseudocode | Yes | The pseudocode for the algorithm (Algorithm 1) and the training process can be found in Appendix B. Appendix B contains "Algorithm 1: UCon-SFDA Uncertainty-Controlled Source-Free Domain Adaptation" which clearly outlines the steps. |
| Open Source Code | Yes | We release our code at https://github.com/xugezheng/UCon_SFDA. |
| Open Datasets | Yes | To evaluate the proposed method, we conduct experiments on several SFDA benchmarks under three different domain shift scenarios... We test our method on the following datasets: Office-31 (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), VisDA2017 (Peng et al., 2017), and DomainNet-126 (Litrico et al., 2023). We further evaluate our method on more complex SFDA tasks. For SFDA with label shift, we employ the VisDA-RUST dataset, which presents a severe label imbalance in the target domain (Li et al., 2021). |
| Dataset Splits | Yes | For general SFDA, we test our method on the following datasets: Office-31... Office-Home... VisDA2017... DomainNet-126... We use the synthetic images as the source domain and the real images as the target domain. For source-free partial set domain adaptation, we follow the setup in Liang et al. (2020) for the Office-Home dataset, where only the first 24 classes are retained in the target domain. |
| Hardware Specification | Yes | All experiments are run on a single 32GB V100 or 40GB A100 GPU. |
| Software Dependencies | No | The paper mentions using SGD with a momentum of 0.9 and a weight decay of 1e-3, and the Nesterov update method. However, it does not specify the versions of any programming languages or software libraries (e.g., PyTorch, TensorFlow) used for implementation. |
| Experiment Setup | Yes | For optimization, we use SGD with a momentum of 0.9 and a weight decay of 1e-3. We also use the Nesterov update method. The initial learning rate for the bottleneck and classification layers is set to 0.001 across all datasets. For the backbone models, the initial learning rates are set as follows: 5e-4 for Office-Home, 1e-4 for DomainNet-126 and Office-31, and 5e-5 for VisDA2017. We use the same learning rate scheduler as Liang et al. (2020) for the Office-Home and DomainNet-126 datasets. The batch size is 64 for all datasets. We train for 30 epochs on VisDA2017 and 45 epochs on Office-Home, Office-31, and DomainNet-126. |
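The optimizer settings above can be sketched as a small, self-contained helper. Note the decay formula below is an assumption: the paper only cites the scheduler of Liang et al. (2020), whose SHOT codebase commonly uses a polynomial decay lr(p) = lr0 * (1 + gamma * p)^(-power) with gamma = 10 and power = 0.75, where p is the fraction of training completed; the per-dataset base rates, however, are taken directly from the row above.

```python
def scheduled_lr(base_lr: float, progress: float,
                 gamma: float = 10.0, power: float = 0.75) -> float:
    """Polynomial learning-rate decay in the style of Liang et al. (2020).

    progress: fraction of training completed, in [0, 1].
    gamma/power values are assumed defaults, not stated in the paper.
    """
    return base_lr * (1.0 + gamma * progress) ** (-power)


# Backbone initial learning rates reported in the paper; the bottleneck
# and classification layers use 0.001 on every dataset.
BACKBONE_LR = {
    "Office-Home": 5e-4,
    "DomainNet-126": 1e-4,
    "Office-31": 1e-4,
    "VisDA2017": 5e-5,
}

if __name__ == "__main__":
    # Show how the backbone rate for Office-Home decays over training.
    for p in (0.0, 0.5, 1.0):
        lr = scheduled_lr(BACKBONE_LR["Office-Home"], p)
        print(f"progress={p:.1f}  backbone_lr={lr:.6e}")
```

The remaining settings (SGD, momentum 0.9, weight decay 1e-3, Nesterov updates, batch size 64) map directly onto standard deep-learning optimizer arguments, e.g. PyTorch's `torch.optim.SGD(params, lr, momentum=0.9, weight_decay=1e-3, nesterov=True)`.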