CDE-Learning: Camera Deviation Elimination Learning for Unsupervised Person Re-identification

Authors: Jinjia Peng, Songyu Zhang, Huibing Wang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrated the superior performance of the proposed CDE-Learning on benchmark datasets. Our proposed method is evaluated on re-identification benchmarks, namely Market-1501 (Zheng et al. 2015), MSMT17 (Wei et al. 2018), PersonX (Sun and Zheng 2019), and CUHK03 (Li et al. 2014).
Researcher Affiliation Academia 1) School of Cyber Security and Computer, Hebei University, Hebei, China; Hebei Machine Vision Engineering Research Center, China. 2) College of Information Science and Technology, Dalian Maritime University. EMAIL, EMAIL, EMAIL
Pseudocode Yes Algorithm 1: CDE-Learning
  Inputs: Unlabeled dataset with camera labels
  Parameters: Sampling parameters p and k
  Output: The fine-tuned encoder and checkpoints
  Start: Initialize epoch parameter num_epochs and iteration parameter num_iters
  while epoch in [1, num_epochs] do
    Extract the features F with the encoder
    Construct camera domains {F_1, F_2, F_3, ..., F_mu}
    Obtain camera centroids {s_1, s_2, s_3, ..., s_mu} by Eq. 1
    Obtain the global centroid s_g by Eq. 2
    Align features F_i to get refined features F'_i by Eq. 3
    Cluster F' = {F'_1, F'_2, F'_3, ..., F'_mu} into m clusters with DBSCAN
    while iter in [1, num_iters] do
      Sample p * k queries from the pseudo-labeled dataset
      Extract the minibatch features Q through the encoder
      Compute loss L = L_c + L_d + L_t
      Back-propagate
      Update parameters of the encoder with the optimizer
      Update the multi-prototype memory by Eq. 5
      Update the centroid memory by Eq. 7
    end while
  end while
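The centroid-alignment steps of Algorithm 1 (Eqs. 1-3) can be sketched in plain Python. The report does not reproduce the exact form of Eq. 3, so the mean-shift alignment below (f' = f - s_i + s_g, shifting each camera's features onto the global centroid) is an assumption, as are all function names; it is a sketch of the idea, not the paper's implementation.

```python
def camera_centroids(features_by_cam):
    """Eq. 1 (assumed): per-camera mean feature s_i."""
    return {cam: [sum(dim) / len(feats) for dim in zip(*feats)]
            for cam, feats in features_by_cam.items()}

def global_centroid(centroids):
    """Eq. 2 (assumed): mean of the camera centroids, s_g."""
    cams = list(centroids.values())
    return [sum(dim) / len(cams) for dim in zip(*cams)]

def align(features_by_cam):
    """Eq. 3 (assumed): remove the per-camera offset, f' = f - s_i + s_g.
    After alignment every camera's centroid coincides with s_g, so DBSCAN
    clusters are less biased by camera-specific shifts."""
    s = camera_centroids(features_by_cam)
    sg = global_centroid(s)
    return {cam: [[f - c + g for f, c, g in zip(feat, s[cam], sg)]
                  for feat in feats]
            for cam, feats in features_by_cam.items()}
```

For example, two cameras whose features differ only by a constant offset end up with identical centroids after `align`, which is the deviation-elimination effect the algorithm relies on before clustering.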
Open Source Code Yes https://github.com/zsszyx/CDE-Learning
Open Datasets Yes Our proposed method is evaluated on re-identification benchmarks, namely Market-1501 (Zheng et al. 2015), MSMT17 (Wei et al. 2018), PersonX (Sun and Zheng 2019), and CUHK03 (Li et al. 2014).
Dataset Splits No The paper mentions several benchmark datasets (Market-1501, MSMT17, PersonX, and CUHK03) but does not explicitly state the training/test/validation splits used in its experiments. It references the datasets by their original papers without specifying how they were partitioned for the current work.
Hardware Specification Yes Our method is trained on an Nvidia A4000 GPU under the PyTorch framework.
Software Dependencies No The paper mentions the PyTorch framework but does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup Yes The Adam optimizer is utilized with weight decay 5e-4 to train our ReID model. The initial learning rate is set to 3.5e-4 for the first ten epochs with a warm-up scheme, after which it is decreased to 1/10 of its previous value every 20 epochs for 80 epochs. Table 8 presents a detailed examination of the outcomes associated with the temperature parameter τ in our method. The results indicate that this parameter's sensitivity is crucial in distinguishing between identities. Notably, our method attains its best performance at τ = 0.05... Table 9 reflects the effects of PK sampling on the ReID outcomes. Our method yields the most favorable results at the parameter setting of (16, 16)...
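The learning-rate schedule quoted above can be sketched as a simple function of the epoch index. The report says only "a warm-up scheme" and "decreased to 1/10 ... every 20 epochs", so the linear warm-up shape and the exact decay boundaries below (first decay at epoch 30, i.e. 20 epochs after warm-up ends) are one plausible reading, not the paper's confirmed implementation.

```python
def lr_at(epoch, base_lr=3.5e-4, warmup_epochs=10, decay_every=20):
    """Assumed schedule: linear warm-up to base_lr over the first 10 epochs,
    then a tenfold step decay every 20 epochs, for 80 epochs total."""
    if epoch < warmup_epochs:
        # Linear ramp: epoch 0 starts at base_lr/10, reaching base_lr at epoch 9.
        return base_lr * (epoch + 1) / warmup_epochs
    # Step decay: divide by 10 once every 20 epochs after warm-up.
    return base_lr * 0.1 ** ((epoch - warmup_epochs) // decay_every)
```

Under this reading the rate is 3.5e-4 for epochs 10-29, 3.5e-5 for epochs 30-49, and so on; in a PyTorch training loop the same shape could be produced with a per-epoch call to `torch.optim.lr_scheduler.LambdaLR`.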