Efficient Connectivity-Preserving Instance Segmentation with Supervoxel-Based Loss Function

Authors: Anna Grim, Jayaram Chandrashekar, Uygar Sümbül

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on the following image segmentation datasets: DRIVE, ISBI12, CrackTree, and EXASPIM. ... Table 1 shows quantitative results for the different models on the segmentation datasets. Our proposed method achieves state-of-the-art results, particularly for the topologically relevant metrics.
Researcher Affiliation | Industry | Anna Grim, Jayaram Chandrashekar, Uygar Sümbül; Allen Institute, 615 Westlake Avenue, Seattle, WA 98109, USA; EMAIL
Pseudocode | Yes | Note that pseudocode for this method is provided in Algo. 1 and 2
Open Source Code | Yes | Note that pseudocode for this method is provided in Algo. 1 and 2 and our code is publicly available at https://github.com/AllenNeuralDynamics/supervoxel-loss.
Open Datasets | Yes | We evaluate our method on the following image segmentation datasets: DRIVE, ISBI12, CrackTree, and EXASPIM. ... DRIVE is a retinal vessel dataset consisting of 20 images with dimensions 584x565 (Staal et al. 2004). ... EXASPIM is a 3-d light sheet microscopy dataset consisting of 37 images whose dimensions range from 256x256x256 to 1024x1024x1024 and voxel size is 1 µm³ (Glaser et al. 2024). Download at s3://aind-msma-morphology-data/EXASPIM25
Dataset Splits | Yes | For the 2-d datasets, we perform 3-fold cross-validation for each method and report the mean and standard deviation across the validation set. For the 3-d dataset, we evaluate the methods on a test set consisting of 4 images.
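The 3-fold cross-validation protocol quoted above can be sketched in plain Python. This is one plausible reading of the split (round-robin fold assignment over the 20 DRIVE images); the authors' exact fold construction is an assumption, and `kfold_indices` is a hypothetical helper, not from the paper's codebase.

```python
def kfold_indices(n_samples, k=3):
    """Partition sample indices into k roughly equal folds and return
    (train_indices, val_indices) pairs, one per held-out fold.

    Round-robin assignment is an illustrative choice; the paper does
    not specify how images were assigned to folds.
    """
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)  # assign sample i to fold i mod k

    splits = []
    for held_out in range(k):
        val = folds[held_out]
        train = [i for j, fold in enumerate(folds) if j != held_out
                 for i in fold]
        splits.append((train, val))
    return splits


# Example: 3-fold splits over the 20 DRIVE images.
splits = kfold_indices(20, k=3)
```

Each of the three splits holds out one fold for validation while training on the other two; per-fold metrics would then be averaged to report the mean and standard deviation.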
Hardware Specification | No | No specific hardware details (GPU models, CPU types, memory) are mentioned in the paper. It only discusses runtime, e.g., 'Runtime/Epoch' in Table 2, but not the hardware used to achieve it.
Software Dependencies | No | The paper does not explicitly list any software dependencies with specific version numbers. It mentions deep learning frameworks in general, but no concrete details.
Experiment Setup | Yes | In all experiments, we set α = 0.5 and β = 0.5 in our proposed topological loss function. ... We recommend training a baseline model with a standard loss function, then fine-tuning with the topological loss function. ... Our experiments show that α = 0.5 and β = 0.5 are optimal for ISBI12, whereas α = 0.9 and β = 0.8 are optimal for EXASPIM (Figure 7).