RUN: Reversible Unfolding Network for Concealed Object Segmentation

Authors: Chunming He, Rihan Zhang, Fengyang Xiao, Chengyu Fang, Longxiang Tang, Yulun Zhang, Linghe Kong, Deng-Ping Fan, Kai Li, Sina Farsiu

ICML 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. "Extensive experiments verify the superiority of RUN and highlight the potential of unfolding-based frameworks for COS. Code is available at https://github.com/ChunmingHe/RUN." ... "Experiments on five COS tasks, as well as salient object detection, validate the superiority of our RUN method."
Researcher Affiliation: Collaboration. "1. Duke University; 2. SIGS, Tsinghua University; 3. Shanghai Jiao Tong University; 4. Nankai Institute of Advanced Research (SHENZHEN-FUTIAN); 5. Meta."
Pseudocode: Yes. "Algorithm S1: Proposed RUN Framework."
Open Source Code: Yes. "Code is available at https://github.com/ChunmingHe/RUN."
Open Datasets: Yes. "we perform experiments on four datasets: CHAMELEON (Skurowski et al., 2018), CAMO (Le et al., 2019), COD10K (Fan et al., 2021a), and NC4K (Lv et al., 2021). ... we utilize two benchmarks: CVC-ColonDB (Tajbakhsh et al., 2015) and ETIS (Silva et al., 2014). ... we evaluate our method on the DRIVE and CORN (Ma et al., 2021) datasets ... experiments on two datasets: GDD (Mei et al., 2020) and GSD (Lin & He, 2021). ... CDS2K dataset (Fan et al., 2023a)."
Dataset Splits: Yes. "For training, we use 1,000 images from CAMO and 3,040 images from COD10K. The remaining images from these two datasets, along with all images from the other datasets, constitute the test set. ... For the DRIVE dataset, training and inference adhere to the dataset's predefined splits. For the CORN dataset, the last 70% of the data is used for training, while the first 30% serves as the test set. ... The training set consists of 2,980 images from GDD and 3,202 images from GSD, while the remaining images are reserved for inference."
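The reported CORN protocol (first 30% of the data as the test set, last 70% for training) amounts to a plain ordered index split. A minimal sketch; the function name `split_corn` and the list-of-samples input are illustrative assumptions, not the authors' actual loader code:

```python
def split_corn(samples):
    """Split an ordered list of CORN samples per the reported protocol:
    the first 30% of the data is the test set, the last 70% is used
    for training. (Illustrative sketch only.)"""
    n_test = int(len(samples) * 0.3)  # size of the leading test portion
    test_set = samples[:n_test]       # first 30% -> test
    train_set = samples[n_test:]      # last 70% -> train
    return train_set, test_set

# Example with 10 dummy sample IDs: 3 go to test, 7 to train.
train, test = split_corn(list(range(10)))
```

Because the split is positional rather than random, it is reproducible without fixing a seed, which matches how the report describes the protocol.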
Hardware Specification: Yes. "We implement our method using PyTorch on two RTX 4090 GPUs."
Software Dependencies: No. "We implement our method using PyTorch on two RTX 4090 GPUs." While PyTorch is mentioned, a specific version number is not provided, and no other software dependencies with version numbers are listed.
Experiment Setup: Yes. "During training, we use the Adam optimizer with momentum parameters (0.9, 0.999). The batch size is set to 36, and the initial learning rate is configured to 0.0001, which is reduced by 0.1 every 80 epochs. The stage number K is set as 4."
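The quoted recipe corresponds to a step-decayed Adam schedule (in PyTorch terms, `optim.Adam(..., lr=1e-4, betas=(0.9, 0.999))` with `StepLR(step_size=80, gamma=0.1)`). A minimal framework-free sketch, assuming "reduced by 0.1" means the learning rate is multiplied by a factor of 0.1 every 80 epochs:

```python
# Hyperparameters quoted in the report; names are our own.
INIT_LR = 1e-4       # initial learning rate
DECAY_FACTOR = 0.1   # multiplicative decay ("reduced by 0.1")
DECAY_EVERY = 80     # epochs between decay steps
BATCH_SIZE = 36
NUM_STAGES = 4       # unfolding stage number K

def learning_rate(epoch):
    """Step-decay schedule: lr = 1e-4 * 0.1 ** (epoch // 80)."""
    return INIT_LR * DECAY_FACTOR ** (epoch // DECAY_EVERY)
```

Under this reading, the learning rate is 1e-4 for epochs 0-79, 1e-5 for epochs 80-159, and so on; the Adam momentum parameters (0.9, 0.999) are PyTorch's defaults.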