Doubly Contrastive Learning for Source-Free Domain Adaptive Person Search

Authors: Yizhen Jia, Rong Quan, Yue Feng, Haiyan Chen, Jie Qin

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on existing state-of-the-art person search models and two widely used benchmarks demonstrate the superiority of the proposed SFDA-PS task, as well as our proposed DCL." "Extensive experimental results demonstrate the effectiveness of DCL in generalizing state-of-the-art person search models." |
| Researcher Affiliation | Academia | Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education; College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China. EMAIL |
| Pseudocode | No | The paper describes its methods using mathematical formulations and diagrams (Figures 2 and 3) but does not include a distinct section or block labeled "Pseudocode" or "Algorithm". |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | "We conduct experiments on two general person search benchmarks, CUHK-SYSU (Xiao et al. 2017) and PRW (Zheng et al. 2017)." |
| Dataset Splits | Yes | "CUHK-SYSU is a large-scale person search dataset... We utilize the standard training/test set, where the training set contains 5,532 identities and 11,206 images, and the test set contains 2,900 query persons and 6,978 images. PRW... The dataset is split into a training set of 5,704 images with 482 different identities and a test set of 6,112 images with 2,057 query persons." |
| Hardware Specification | Yes | "We implement our model with the PyTorch library and conduct all experiments on a single NVIDIA RTX A5000 GPU." |
| Software Dependencies | No | The paper mentions using the PyTorch library for implementation but does not specify its version or other software dependencies with their respective versions. |
| Experiment Setup | Yes | "In all of our experiments, the batch size is set to 1, and we adopt the stochastic gradient descent (SGD) optimizer with a momentum of 0.9 and a weight decay of 0.0005. We optimize the model for 10 epochs, using an initial learning rate of 0.001, which is decreased by a factor of 10 at epoch 8. As for the data augmentation, we apply the random horizontal flip for weak augmentation and randomly add color jittering, grayscale, Gaussian blur, and cutout patches for strong augmentations, following UMT (Deng et al. 2021). N_p is set to 300 to construct the relation graph. ε is set to 0.5 to generate the pair-wise relation labels in ReC. We set default hyper-parameters ε_h = 0.9, ε_l = 0.8, and α = 0.8." |
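The quoted experiment setup can be sketched as a minimal, self-contained configuration. This is our illustration, not the authors' code: the function name `lr_at_epoch` and the dictionary keys are hypothetical, and whether "decreased at epoch 8" is 0- or 1-indexed is an assumption the paper does not resolve. In PyTorch the same schedule would typically be `torch.optim.SGD(..., lr=1e-3, momentum=0.9, weight_decay=5e-4)` combined with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[8], gamma=0.1)`.

```python
def lr_at_epoch(epoch, base_lr=1e-3, decay_epoch=8, gamma=0.1):
    """Step-decay learning rate matching the paper's description:
    base LR 0.001, divided by 10 at epoch 8 (0-indexed here, by
    assumption) over a 10-epoch run."""
    return base_lr * gamma if epoch >= decay_epoch else base_lr


# Hyper-parameters quoted from the paper (key names are ours).
HPARAMS = {
    "batch_size": 1,
    "momentum": 0.9,
    "weight_decay": 5e-4,
    "epochs": 10,
    "N_p": 300,    # points used to construct the relation graph
    "eps": 0.5,    # threshold for pair-wise relation labels in ReC
    "eps_h": 0.9,
    "eps_l": 0.8,
    "alpha": 0.8,
}

# Learning rate over the full run: 1e-3 for epochs 0-7, 1e-4 after.
schedule = [lr_at_epoch(e) for e in range(HPARAMS["epochs"])]
```

A quick check of the schedule, `[lr_at_epoch(e) for e in range(10)]`, yields eight epochs at 1e-3 followed by two at 1e-4, matching the quoted setup.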