NightReID: A Large-Scale Nighttime Person Re-Identification Benchmark

Authors: Yuxuan Zhao, Weijian Ruan, He Li, Mang Ye

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on multiple nighttime Re-ID datasets demonstrate the significance of NightReID and validate the efficacy, flexibility, and applicability of the EDA framework.
Researcher Affiliation Collaboration School of Computer Science, Wuhan University; Hangzhou Research Institute, Xidian University; Smart City Research Institute of China Electronics Technology Group Corporation
Pseudocode No The paper describes the methodology using mathematical equations and descriptive text, but it does not include any clearly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code Yes Code: https://github.com/msm8976/NightReID
Open Datasets Yes Code: https://github.com/msm8976/NightReID
Dataset Splits Yes For dataset partitioning, we randomly divide the images of the annotated 1,500 identities into training and testing sets. Specifically, the training set comprises 500 annotated identities with 15,514 images, while the complete testing set consists of 1,000 annotated identities with 25,914 images. Within the testing set, 528 identities are randomly selected as a subset to simulate different retrieval scenarios. Additionally, 11,811 images from 1,096 unlabeled identities serve as the distractor set. The query set consists of 10% randomly selected annotated-identity images from the corresponding testing set.
Hardware Specification Yes The training process spanned 120 epochs on a single NVIDIA RTX 3090 GPU.
Software Dependencies No The paper mentions using 'TransReID-SSL (Luo et al. 2021) as the backbone' and the 'SGD optimizer', but does not provide specific version numbers for any software libraries, programming languages, or development environments.
Experiment Setup Yes All images were resized to 256×128 pixels, with a batch size of 64, and the training process spanned 120 epochs on a single NVIDIA RTX 3090 GPU. We utilized the SGD optimizer with a momentum of 0.9. The learning rate was set to 0.0004, with a warm-up of 20 epochs followed by cosine decay.
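The Dataset Splits row above describes an identity-disjoint partition (500 training identities, 1,000 testing identities) with 10% of test images drawn as queries. A minimal sketch of that protocol, with hypothetical function names and placeholder image lists (the paper's actual split code is in the NightReID repository):

```python
import random


def split_identities(ids, n_train=500, seed=0):
    """Randomly split identity labels into identity-disjoint train/test sets,
    mirroring the 500/1,000 split described above. Function name and seed
    are illustrative assumptions, not the paper's code."""
    rng = random.Random(seed)
    shuffled = list(ids)
    rng.shuffle(shuffled)
    return set(shuffled[:n_train]), set(shuffled[n_train:])


def select_query(test_images, fraction=0.10, seed=0):
    """Draw ~10% of test-set images as the query set; the remainder forms
    the gallery. The exact sampling procedure is an assumption."""
    rng = random.Random(seed)
    imgs = list(test_images)
    rng.shuffle(imgs)
    k = int(len(imgs) * fraction)
    return imgs[:k], imgs[k:]


# Using the paper's identity count (1,500 annotated identities):
train_ids, test_ids = split_identities(range(1500))
```

With 25,914 test images, this yields 2,591 query images and a 23,323-image gallery, consistent with the 10% query fraction stated above.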
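The Experiment Setup row above specifies a base learning rate of 0.0004 with a 20-epoch warm-up followed by cosine decay over 120 epochs. A minimal sketch of such a schedule, assuming linear warm-up and decay toward zero (the warm-up shape and final learning rate are not stated in the quoted setup and are assumptions here):

```python
import math


def learning_rate(epoch, base_lr=4e-4, warmup_epochs=20, total_epochs=120):
    """Per-epoch learning rate: linear warm-up to base_lr over the first
    `warmup_epochs`, then cosine decay toward zero by `total_epochs`.
    Decay-to-zero is a common default, assumed rather than quoted."""
    if epoch < warmup_epochs:
        # Linear ramp: reaches base_lr at the last warm-up epoch.
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine decay over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

In a PyTorch pipeline this curve would typically be realized with a warm-up wrapper around `torch.optim.lr_scheduler.CosineAnnealingLR` on top of `torch.optim.SGD(momentum=0.9)`.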