UN-DETR: Promoting Objectness Learning via Joint Supervision for Unknown Object Detection
Authors: Haomiao Liu, Hao Xu, Chuhuai Yue, Bo Ma
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our UN-DETR is comprehensively evaluated on multiple UOD and known detection benchmarks, demonstrating its effectiveness and achieving state-of-the-art performance. Experiment Following the UOD Benchmark, we utilize COCO-OOD, COCO-Mixed (Liang et al. 2023), and VOC (Everingham et al. 2010) as test sets and employ mAP, U-AP, U-F1, U-PRE, and U-REC as evaluation metrics, as detailed in the Appendix. Tables 1 and 2 present the results of our method UN-DETR, alongside 8 classic or recent state-of-the-art methods, on the UOD Benchmark. To examine the contribution of each component in our method, we conduct adequate ablation experiments as presented in Table 3. |
| Researcher Affiliation | Academia | Haomiao Liu*, Hao Xu*, Chuhuai Yue*, Bo Ma, Beijing Institute of Technology |
| Pseudocode | No | The paper describes methods and processes in narrative text and figures, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/ndwxhmzz/UN-DETR |
| Open Datasets | Yes | Experiment Following the UOD Benchmark, we utilize COCO-OOD, COCO-Mixed (Liang et al. 2023), and VOC (Everingham et al. 2010) as test sets and employ mAP, U-AP, U-F1, U-PRE, and U-REC as evaluation metrics |
| Dataset Splits | No | The paper uses established benchmark datasets and refers to "test sets" (COCO-OOD, COCO-Mixed, VOC) and "training set" (VOC training set for pretraining), implying standard splits. However, it does not provide explicit percentages, sample counts, or a detailed methodology for splitting beyond stating which datasets are used for training and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., PyTorch 1.9, CUDA 11.1). It mentions using ResNet50 as a backbone, which is a model architecture, not a software dependency with a specific version. |
| Experiment Setup | Yes | The weight parameters α and β are empirically set to 0.6 and 0.4, respectively. In Eq. 4, C is set to 0.5 and τ is set to 0.6. ... λ1, λ2, and λ3 are the weights of the loss, set to 3, 2, and 5, respectively. |
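The quoted setup only gives the loss-weight values λ1 = 3, λ2 = 2, and λ3 = 5, without the full loss expression. A minimal sketch, assuming the common pattern of a weighted sum of three loss components (the component names `l_cls`, `l_box`, `l_obj` are hypothetical placeholders, not taken from the paper):

```python
def total_loss(l_cls: float, l_box: float, l_obj: float,
               lambdas=(3.0, 2.0, 5.0)) -> float:
    """Weighted sum of three loss terms.

    The weights (3, 2, 5) are the lambda1, lambda2, lambda3 values
    reported in the paper; the assignment of each weight to a specific
    loss component is an assumption for illustration only.
    """
    w1, w2, w3 = lambdas
    return w1 * l_cls + w2 * l_box + w3 * l_obj


# Example: with all three components equal to 1.0 the total is 3+2+5 = 10.
print(total_loss(1.0, 1.0, 1.0))  # → 10.0
```

This mirrors how DETR-style detectors typically combine per-term losses into a single training objective, but the actual formulation would need to be confirmed against the paper's released code.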