PseDet: Revisiting the Power of Pseudo Label in Incremental Object Detection
Authors: Qiuchen Wang, Zehui Chen, Chenhongyi Yang, Jiaming Liu, Zhenyu Li, Feng Zhao
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the competitive COCO benchmarks demonstrate the effectiveness and generalization of PseDet. Notably, it achieves 43.5+/41.2+ mAP under the 1/4-step incremental settings, achieving new state-of-the-art performance. Extensive experiments conducted on the MS COCO dataset with various incremental settings validate the effectiveness and generalization of our approach. |
| Researcher Affiliation | Academia | 1MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, USTC 2University of Edinburgh 3Peking University 4King Abdullah University of Science and Technology |
| Pseudocode | Yes | Algorithm 1 Pseudo label selection in stage i |
| Open Source Code | Yes | Code is available at https://github.com/wang-qiuchen/PseDet. |
| Open Datasets | Yes | MS COCO 2017 (Lin et al., 2014) is an object detection dataset with 80 categories. |
| Dataset Splits | Yes | We mainly focus on the following two scenarios: (a) One-step: 40+40, 50+30, 60+20, 70+10; (b) Multi-step: 40+20×2, 40+10×4. |
| Hardware Specification | Yes | All experiments are performed on 8 NVIDIA Tesla V100 GPUs |
| Software Dependencies | No | The paper mentions software components like GFL, Deformable DETR, and MMDetection but does not provide specific version numbers for these or other underlying software dependencies (e.g., Python, PyTorch). |
| Experiment Setup | Yes | For GFL (Deformable DETR), we set the batch size to 2 (4) per GPU, trained for 12 (50) epochs, and used SGD (Adam W) as the optimizer. |