Unsupervised Domain Adaptive Person Search via Dual Self-Calibration

Authors: Linfeng Qi, Huibing Wang, Jiqing Zhang, Jinjia Peng, Yang Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on different target domain datasets validate the effectiveness of the proposed DSCA, which outperforms existing SOTA unsupervised domain adaptive methods by significant margins. Ablation experiments also evidence the importance of each key component of DSCA."
Researcher Affiliation | Academia | (1) School of Information Science and Technology, Dalian Maritime University, Dalian, China; (2) School of Cyber Security and Computer, Hebei University, Baoding, China; (3) School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methods using text and mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/whbdmu/DSCA
Open Datasets | Yes | "We evaluate our DSCA on two person search datasets: CUHK-SYSU (Xiao et al. 2017) and PRW (Zheng et al. 2017)."
Dataset Splits | Yes | "The CUHK-SYSU dataset is a large-scale person search benchmark comprising a total of 18,184 images. The dataset contains 8,432 unique person identities and 96,143 annotated bounding boxes. It is divided into a training set with 5,532 identities and 11,206 images, and a test set with 2,900 query persons and 6,978 gallery images. The PRW dataset includes 11,816 scene images with annotations for 932 unique person identities and 43,110 bounding boxes. The training set comprises 932 identities with 5,704 images, while the test set contains 2,057 query persons and 6,112 scene images."
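The quoted split sizes can be sanity-checked directly: for both benchmarks, the training images plus the test gallery/scene images account for every image in the dataset. A minimal sketch encoding those statistics as plain dictionaries (the variable names are illustrative, not from the paper's code):

```python
# Dataset statistics as quoted above; dictionary names are illustrative.
cuhk_sysu = {
    "total_images": 18184,
    "identities": 8432,
    "boxes": 96143,
    "train": {"identities": 5532, "images": 11206},
    "test": {"query_persons": 2900, "gallery_images": 6978},
}
prw = {
    "total_images": 11816,
    "identities": 932,
    "boxes": 43110,
    "train": {"identities": 932, "images": 5704},
    "test": {"query_persons": 2057, "scene_images": 6112},
}

# Train images + test gallery/scene images cover the full image set.
assert cuhk_sysu["train"]["images"] + cuhk_sysu["test"]["gallery_images"] == cuhk_sysu["total_images"]
assert prw["train"]["images"] + prw["test"]["scene_images"] == prw["total_images"]
```

Note that PRW uses all 932 identities in training; only CUHK-SYSU holds out identities (5,532 of 8,432) for its test split.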
Hardware Specification | Yes | "We implemented our DSCA using Pytorch and trained it on an NVIDIA A800 GPU with a batch size of 4."
Software Dependencies | No | The paper mentions 'Pytorch' but does not specify a version number, which is required for a reproducible description of software dependencies.
Experiment Setup | Yes | "We adopted the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.0024, which is warmed up in the first epoch. [...] We set both the momentum factor γ and smoothing factor m to 0.2 for online and offline cluster updating, respectively. [...] Specifically, when PRW is used as the target domain, our DSCA undergoes pre-training on the source domain CUHK-SYSU for 7 epochs before commencing joint training for 13 epochs. Conversely, when CUHK-SYSU is the target domain, we first pre-train on the source domain PRW for 2 epochs before beginning joint training for 7 epochs."
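A minimal PyTorch sketch of the reported optimizer settings: SGD at learning rate 0.0024, warmed up over the first epoch, plus an EMA-style cluster update with factor 0.2. The stand-in model, the iteration count, the linear warmup shape, and the exact EMA formula are assumptions; the paper states only the values quoted above and does not spell out these details.

```python
import torch

# Stand-in network; the actual DSCA architecture is not reproduced here.
model = torch.nn.Linear(256, 128)

# SGD at the reported learning rate of 0.0024.
optimizer = torch.optim.SGD(model.parameters(), lr=0.0024)

# Warmup over the first epoch; a linear ramp is an assumption (the paper
# only says the learning rate "is warmed up in the first epoch").
iters_per_epoch = 100  # placeholder; depends on dataset size and batch size 4
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.1, total_iters=iters_per_epoch
)

def ema_update(center: torch.Tensor, feat: torch.Tensor, m: float = 0.2) -> torch.Tensor:
    """One common form of momentum cluster update; the paper sets the
    momentum/smoothing factors to 0.2 but does not give this formula."""
    return m * center + (1.0 - m) * feat
```

After `iters_per_epoch` scheduler steps the learning rate reaches the full 0.0024, matching the "warmed up in the first epoch" description under the linear-ramp assumption.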