Self-Supervised Learning of Intertwined Content and Positional Features for Object Detection
Authors: Kang-Jun Liu, Masanori Suganuma, Takayuki Okatani
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method against state-of-the-art approaches, pre-training on ImageNet-1K and fine-tuning on downstream tasks. Our method outperforms the state-of-the-art SSL methods on the COCO object detection benchmark, achieving significant improvements with fewer pre-training epochs. These results suggest that better integration of positional information into self-supervised learning can improve performance on dense prediction tasks. |
| Researcher Affiliation | Academia | ¹Graduate School of Information Sciences, Tohoku University, Miyagi, Japan; ²RIKEN Center for AIP, Tokyo, Japan. Correspondence to: Takayuki Okatani <EMAIL>. |
| Pseudocode | No | The paper describes methods and processes using mathematical equations and textual explanations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/KJ-rc/IntertwinedSSL. |
| Open Datasets | Yes | We experimentally compare the proposed method with existing state-of-the-art approaches on the COCO detection dataset (Lin et al., 2014) in the standard setting, i.e., pre-training on ImageNet-1K (Deng et al., 2009) and fine-tuning on COCO. ... We also report performance on ADE20K (Zhou et al., 2017), in line with recent SSL studies (Locatello et al., 2020; Wang et al., 2023). |
| Dataset Splits | Yes | For the COCO dataset, we follow the evaluation methodology from DropPos (Wang et al., 2023), using ViTDet (Li et al., 2022b) as our detection framework while removing window attention and relative position encodings from the backbone. For the ADE20K dataset, we adhere to the evaluation protocol from LOCA (Caron et al., 2024), using the linear decoder approach from Segmenter (Strudel et al., 2021), which utilizes a minimal number of adapter layers. |
| Hardware Specification | Yes | We use the same hyperparameters for both the ViT-B and ViT-S backbones, except for the number of GPUs: 8 and 4, respectively. The details are provided in Table 5. ... We pre-train ViT-S on a single node with 4 A6000 GPUs and ViT-B on 2 nodes with the same setup. |
| Software Dependencies | No | The paper mentions several software components, frameworks, and optimizers such as "AdamW", "ViTDet", "Segmenter", "MMSegmentation", and "xformers", but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | We pre-train our model on ImageNet-1K (Deng et al., 2009) using the AdamW (Loshchilov, 2017) optimizer for 100 epochs. ... Our training setup is largely based on DINOv2 (Oquab et al., 2024), with several modifications detailed in Appendix B.1. ... Table 5. Implementation details of pre-training on ImageNet-1K includes: #Epochs 100, Optimizer AdamW, Base learning rate 2e-3, Warmup (#epochs) 10, Weight decay (cosine) 0.04 to 0.4, Total batch size 512, and many other detailed hyperparameters. |
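To make the quoted Table 5 hyperparameters concrete, the sketch below computes per-epoch learning-rate and weight-decay values. Only the numbers (100 epochs, base LR 2e-3, 10 warmup epochs, weight decay cosine 0.04 to 0.4) come from the paper; the exact schedule shapes (linear LR warmup followed by cosine decay, cosine ramp of weight decay) are an assumption based on common DINOv2-style practice, not the authors' released code.

```python
import math

# Values quoted from the paper's Table 5; schedule shapes are assumed.
TOTAL_EPOCHS = 100
WARMUP_EPOCHS = 10
BASE_LR = 2e-3
WD_START, WD_END = 0.04, 0.4

def lr_at(epoch: float) -> float:
    """Linear warmup to BASE_LR, then cosine decay toward 0 (assumed shape)."""
    if epoch < WARMUP_EPOCHS:
        return BASE_LR * epoch / WARMUP_EPOCHS
    t = (epoch - WARMUP_EPOCHS) / (TOTAL_EPOCHS - WARMUP_EPOCHS)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * t))

def wd_at(epoch: float) -> float:
    """Cosine ramp of weight decay from WD_START (0.04) to WD_END (0.4)."""
    t = epoch / TOTAL_EPOCHS
    return WD_END + (WD_START - WD_END) * 0.5 * (1.0 + math.cos(math.pi * t))

print(lr_at(5))       # halfway through warmup: half of BASE_LR
print(wd_at(0))       # start of training: 0.04
print(wd_at(100))     # end of training: 0.4
```

In a training loop these values would be written into each optimizer parameter group per epoch (e.g. `group["lr"] = lr_at(epoch)`); the point here is only to show how the reported 0.04-to-0.4 cosine weight-decay range evolves over the 100 epochs.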