Universal Domain Adaptive Object Detection via Dual Probabilistic Alignment
Authors: Yuanfan Zheng, Jinlin Wu, Wuyang Li, Zhen Chen
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our DPA outperforms state-of-the-art UniDAOD and DAOD methods across various datasets and scenarios, including open, partial, and closed sets. |
| Researcher Affiliation | Academia | Yuanfan Zheng¹,²*, Jinlin Wu¹,²*, Wuyang Li³, Zhen Chen¹ (¹CAIR, HKISI-CAS; ²MAIS, Institute of Automation, Chinese Academy of Sciences; ³The Chinese University of Hong Kong). EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes its methodology in prose and mathematical formulations but does not contain a clearly labeled pseudocode block or algorithm section. |
| Open Source Code | Yes | Code: https://github.com/zyfone/DPA |
| Open Datasets | Yes | We evaluate our DPA framework on five datasets across three domain adaptation scenarios (open-set, partial-set, and closed-set): Foggy Cityscapes (Sakaridis, Dai, and Van Gool 2018), Cityscapes (Cordts et al. 2016), Pascal VOC (Everingham et al. 2010), Clipart1k (Inoue et al. 2018), and Watercolor (Inoue et al. 2018). |
| Dataset Splits | No | The paper states: "We conduct extensive experiments following the setting (Shi et al. 2022) for three benchmarks: open-set, partial-set, and closed-set." While it references a general experimental setting, it does not explicitly detail the training, validation, or test dataset splits (e.g., percentages, sample counts) for each dataset within the paper itself. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions optimizers like SGD and Adam but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The DPA model is trained for 100k iterations with an initial learning rate of 1e-3, decayed to 1e-4 after 50k iterations. The loss L_DPA is optimized with SGD, while the bound loss L_bound is optimized with Adam at a learning rate of 0.1. The hyperparameter α is set to 0 for the initial epoch and 0.1 thereafter. |
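The reported training schedule can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the parameter groups and the `alpha_for_epoch` helper are hypothetical stand-ins, and only the numbers (100k iterations, lr 1e-3 decayed to 1e-4 at 50k, Adam lr 0.1, α = 0 then 0.1) come from the paper.

```python
import torch

# Hypothetical parameter groups standing in for the detector weights (L_DPA)
# and the bound-loss parameters (L_bound); names are illustrative only.
detector_params = [torch.nn.Parameter(torch.zeros(4))]
bound_params = [torch.nn.Parameter(torch.zeros(2))]

# L_DPA: SGD with initial lr 1e-3, decayed to 1e-4 after 50k of 100k iterations.
sgd = torch.optim.SGD(detector_params, lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    sgd, milestones=[50_000], gamma=0.1
)

# L_bound: Adam with lr 0.1.
adam = torch.optim.Adam(bound_params, lr=0.1)

def alpha_for_epoch(epoch: int) -> float:
    """alpha is 0 for the initial epoch and 0.1 thereafter."""
    return 0.0 if epoch == 0 else 0.1
```

Stepping `scheduler` once per training iteration reproduces the stated decay: the SGD learning rate stays at 1e-3 for the first 50k iterations and drops to 1e-4 for the remaining 50k.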