Optimizing Label Assignment for Weakly Supervised Person Search

Authors: Haiyang Zhu, Xi Yang, Nannan Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on the CUHK-SYSU and PRW datasets demonstrate that our method achieves state-of-the-art performance in weakly supervised person search."
Researcher Affiliation | Academia | Haiyang Zhu, Xi Yang*, Nannan Wang; State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an 710071, China. EMAIL, EMAIL, EMAIL
Pseudocode | Yes | "Algorithm 1: Context-Aware Clustering (CAC)"
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | CUHK-SYSU: "CUHK-SYSU dataset (Xiao et al. 2017) is a large-scale person search dataset..." PRW: "PRW dataset (Zheng et al. 2017) is captured by six spatially disjoint cameras in the university."
Dataset Splits | Yes | CUHK-SYSU: "The training set consists of 11,206 images with 5,532 identities and several unlabeled ones. The testing set has 6,978 gallery images and 2,900 probe images." PRW: "The training set contains 5,704 frames with 482 identities, and the testing set includes 6,112 gallery images and 2,057 queries with 450 identities."
Hardware Specification | Yes | "All experiments are implemented on the PyTorch framework, and the network is trained on the NVIDIA RTX 4090."
Software Dependencies | No | "All experiments are implemented on the PyTorch framework... We employ Faster R-CNN (Ren et al. 2015) released by OpenMMLab (Chen et al. 2019b) as our backbone network..." The paper names PyTorch, Faster R-CNN, and OpenMMLab, but does not provide version numbers for these dependencies.
Experiment Setup | Yes | "The scene images are resized to 1500×900, and cropped images are rescaled to 224×96. The batched Stochastic Gradient Descent (SGD) optimizer is used with a momentum of 0.9. The weight decay factor for L2 regularization is set to 5×10⁻⁴. We use a mini-batch size of 4 for the main batch and a batch size of 16 for asynchronous data. The initial learning rate is 1×10⁻³. We set the adjustment factor β to 0.2 and the label assignment parameter k to 3. The model is trained for 26 epochs with the learning rate multiplied by 0.1 at epochs 16 and 22."