unMORE: Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning

Authors: Yafei Yang, Zihui Zhang, Bo Yang

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate that unMORE significantly outperforms all existing unsupervised methods across 6 real-world benchmark datasets, including the challenging COCO dataset, achieving state-of-the-art object segmentation results. Remarkably, our method excels in crowded images where all baselines collapse.
Researcher Affiliation Academia 1Shenzhen Research Institute, The Hong Kong Polytechnic University; 2vLAR Group, The Hong Kong Polytechnic University. Correspondence to: Bo Yang <EMAIL>.
Pseudocode No The paper describes the multi-object reasoning module through structured steps (Step #0, Step #1, Step #2, Step #3) in Section 3.3 and Appendix A.3, but these are presented in prose without being formatted as explicit pseudocode blocks or algorithms using code-like syntax.
Open Source Code Yes Our code and data are available at https://github.com/vLAR-group/unMORE
Open Datasets Yes Datasets: Evaluation of existing unsupervised multi-object segmentation methods is primarily conducted on the challenging COCO validation set (Lin et al., 2014). However, we empirically find that a large number of objects are actually not annotated in validation set. ... We also evaluate on datasets of COCO20K (Lin et al., 2014), LVIS (Gupta et al., 2019), VOC (Everingham et al., 2010), KITTI (Geiger et al., 2012), Object365 (Shao et al., 2019), and Open Images (Kuznetsova et al., 2020).
Dataset Splits Yes Evaluation of existing unsupervised multi-object segmentation methods is primarily conducted on the challenging COCO validation set (Lin et al., 2014). ... It is denoted as COCO* validation set and will be released to the community. ... The COCO in the paper refers to the 2017 version that contains 118K training images and 5K validation images. COCO 20K is a subset of the COCO trainval2014 with 19817 images.
Hardware Specification No The paper states, 'For a fair comparison, all methods are evaluated on the same hardware configurations.' in Appendix A.13, but it does not specify any details about the CPU, GPU, or other hardware components used for the experiments.
Software Dependencies No The architecture for the Class Agnostic Detector is Cascade Mask R-CNN, and all experiments are performed with the Detectron2 (Wu et al., 2019) platform. However, the paper does not provide specific version numbers for Detectron2 or any other software dependencies.
Experiment Setup Yes Objectness Network Training Strategy. The object existence model is trained using the Adam optimizer for 100K iterations with a batch size of 64. The learning rate is set to be a constant 0.0001. The object center and boundary models are jointly trained using the Adam optimizer for 50K iterations with a batch size of 16. The learning rate starts at 0.0001 and is divided by 10 at 10K and 20K iterations. ... Detectors are optimized for 25K iterations using SGD optimizer with a learning rate of 0.005 and a batch size of 16. We use a weight decay of 0.00005 and 0.9 momentum. ... τ^e_conf = 0.5; τ^c_conf = 0.8; τ^b_conf = 0.75
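The quoted setup fully determines the step learning-rate schedule for the center/boundary models and the remaining hyperparameters. A minimal sketch of that schedule, with the reported values collected into a config dict (the function and variable names here are illustrative, not from the paper's released code):

```python
# Step LR schedule as described in the paper: start at 1e-4,
# divide by 10 at 10K and again at 20K iterations.
def center_boundary_lr(iteration, base_lr=1e-4, milestones=(10_000, 20_000)):
    lr = base_lr
    for m in milestones:
        if iteration >= m:
            lr /= 10  # each milestone crossed divides the rate by 10
    return lr

# Reported hyperparameters, gathered for reference (names are ours).
TRAIN_CONFIG = {
    "existence_model":       {"optimizer": "Adam", "iters": 100_000,
                              "batch_size": 64, "lr": 1e-4},
    "center_boundary_model": {"optimizer": "Adam", "iters": 50_000,
                              "batch_size": 16, "lr_schedule": center_boundary_lr},
    "detector":              {"optimizer": "SGD", "iters": 25_000,
                              "batch_size": 16, "lr": 5e-3,
                              "weight_decay": 5e-5, "momentum": 0.9},
    "thresholds": {"tau_e_conf": 0.5, "tau_c_conf": 0.8, "tau_b_conf": 0.75},
}
```

For example, `center_boundary_lr(15_000)` yields 1e-5 and `center_boundary_lr(30_000)` yields 1e-6, matching the two divide-by-10 drops in the quoted schedule.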