Towards Efficient Object Re-Identification with a Novel Cloud-Edge Collaborative Framework
Authors: Chuanming Wang, Yuxin Yang, Mengshi Qi, Huanhuan Zhang, Huadong Ma
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our method obviously reduces transmission overhead and significantly improves performance. (Section 5, Experiments) |
| Researcher Affiliation | Academia | The State Key Laboratory of Networking and Switching Technology Beijing University of Posts and Telecommunications |
| Pseudocode | No | The paper describes the methodology using mathematical formulations (e.g., equations 2, 3, 4, 5, 6, 7, 8, 9, 10) and descriptive text, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about the release of source code, a link to a code repository, or information about code in supplementary materials. |
| Open Datasets | Yes | Datasets. We mainly evaluate our proposed framework and method on the DukeMTMC-reID (Zheng, Zheng, and Yang 2017) and Market-1501 (Zheng et al. 2015) datasets, since they are annotated with high-quality timestamps. |
| Dataset Splits | No | The paper mentions using the DukeMTMC-reID and Market-1501 datasets for evaluation but does not specify the training, testing, or validation splits used for experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU models, memory specifications) used to run the experiments. |
| Software Dependencies | No | The paper mentions using Adam as an optimizer but does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Hyper-parameters: To train the DaCM network, we employ Adam (Kingma and Ba 2015) as the optimizer. The initial learning rate is set to 0.01 and is reduced by a factor of 10 every 30 epochs. γ0 and γ1 are both set to 0.01 as the default. α and β are both set to 0.1. λ in Eq. (2) is set to 10,000 as the default. B is set to 3C, i.e., each edge device can upload an average of three images at a time. |
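
The learning-rate schedule quoted in the Experiment Setup row (initial rate 0.01, reduced by a factor of 10 every 30 epochs) can be sketched as a small helper. This is a minimal illustration, not the authors' code; the function name and the interpretation of "reduced by 10" as division by 10 are assumptions.

```python
def learning_rate(epoch, base_lr=0.01, decay=0.1, step=30):
    """Stepped schedule: scale base_lr by `decay` once per `step` epochs.

    Matches the paper's stated setup under the assumption that
    "reduced by 10 every 30 epochs" means divided by 10.
    """
    return base_lr * (decay ** (epoch // step))

# Example: rates at the start of each decay interval.
schedule = {e: learning_rate(e) for e in (0, 30, 60)}
```

With the defaults above, epochs 0-29 train at 0.01, epochs 30-59 at 0.001, and epochs 60-89 at 0.0001.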