RankSEG: A Consistent Ranking-based Framework for Segmentation
Authors: Ben Dai, Chunlin Li
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we establish a theoretical foundation of segmentation with respect to the Dice/IoU metrics, including the Bayes rule and Dice-/IoU-calibration, analogous to classification-calibration or Fisher consistency in classification. ... The numerical effectiveness of RankDice/mRankDice is demonstrated in various simulated examples and Fine-annotated Cityscapes, Pascal VOC and Kvasir-SEG datasets with state-of-the-art deep learning architectures. |
| Researcher Affiliation | Academia | Ben Dai EMAIL Department of Statistics The Chinese University of Hong Kong Hong Kong SAR. Chunlin Li EMAIL School of Statistics University of Minnesota MN 55455 USA. |
| Pseudocode | Yes | Algorithm 1: Computing schemes for the proposed RankDice framework. ... Algorithm 2: mRankDice for overlapping mDice-segmentation. |
| Open Source Code | Yes | Python module and source code are available on GitHub at https://github.com/statmlben/rankseg. |
| Open Datasets | Yes | The numerical effectiveness of RankDice/mRankDice is demonstrated in various simulated examples and Fine-annotated Cityscapes, Pascal VOC and Kvasir-SEG datasets with state-of-the-art deep learning architectures. |
| Dataset Splits | Yes | Pascal VOC 2012 dataset contains 1,464 training and 1,449 validation pixel-level annotated images. |
| Hardware Specification | Yes | All experiments are conducted using PyTorch and CUDA on an NVIDIA GeForce RTX 3080 GPU. |
| Software Dependencies | No | All experiments are conducted using PyTorch and CUDA on an NVIDIA GeForce RTX 3080 GPU. ... The experiment protocol of our numerical sections basically follows a well-developed GitHub repository PYTORCH-SEGMENTATION (Ouali, 2022). |
| Experiment Setup | Yes | For all methods, we employ SGD on the learning rate (lr) schedule lr schedule = poly, and the initial learning rate initial lr=0.01, weight decay=100, momentum=0.9, crop size 512x512, batch size 6, and 300 epochs. The performance on validation set is measured in terms of the mDice and mIoU averaged across 19 object classes (Table 2). |
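The table above reports validation performance in terms of the Dice and IoU metrics (averaged over classes as mDice/mIoU). As a point of reference, a minimal sketch of the two per-mask metrics — Dice = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B| — on flat binary masks; the function name and the convention of returning 1.0 for two empty masks are illustrative assumptions, not taken from the paper's implementation:

```python
def dice_and_iou(pred, target):
    """Dice and IoU for binary masks given as flat 0/1 sequences.

    NOTE: illustrative sketch only; the empty-mask convention (both
    metrics = 1.0 when pred and target are all zeros) is an assumption.
    """
    inter = sum(p * t for p, t in zip(pred, target))  # |A ∩ B|
    total = sum(pred) + sum(target)                   # |A| + |B|
    union = total - inter                             # |A ∪ B|
    dice = 2 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

For example, masks overlapping in one of two foreground pixels each give Dice = 0.5 and IoU = 1/3.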
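The setup also specifies a poly learning-rate schedule with initial lr = 0.01. A common formulation of poly decay (as used in segmentation training pipelines such as PYTORCH-SEGMENTATION) is lr = initial_lr · (1 − step/max_steps)^power; the sketch below assumes the conventional power = 0.9, which the quoted excerpt does not state:

```python
def poly_lr(initial_lr, step, max_steps, power=0.9):
    """Polynomial ("poly") learning-rate decay.

    power=0.9 is a common default in segmentation pipelines, assumed
    here; the reviewed excerpt only names the schedule, not the exponent.
    """
    return initial_lr * (1.0 - step / max_steps) ** power
```

With initial_lr = 0.01 this starts at 0.01 at step 0 and decays smoothly to 0 at the final step.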