Efficient Neuron Segmentation in Electron Microscopy by Affinity-Guided Queries
Authors: Hang Chen, Chufeng Tang, Xiao Li, Xiaolin Hu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets demonstrated that our method achieved better results over state-of-the-art methods with a 2-3× speedup in inference. Code is available at https://github.com/chenhang98/AGQ. [...] We conducted experiments on benchmark datasets AC3/AC4 (Kasthuri et al., 2015) and ZEBRAFINCH (Kornfeld et al., 2017). The results demonstrated that our method achieved superior results in terms of both accuracy and efficiency. |
| Researcher Affiliation | Academia | Hang Chen1, Chufeng Tang1, Xiao Li1, Xiaolin Hu1,2,3 1. Department of Computer Science and Technology, Institute for AI, BNRist, Tsinghua University, Beijing 100084, China 2. Tsinghua Laboratory of Brain and Intelligence (THBI), IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China 3. Chinese Institute for Brain Research (CIBR), Beijing 100010, China |
| Pseudocode | No | The paper describes methods using text, equations, and diagrams (e.g., Figure 2, Figure 3, Figure 4) but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Experiments on benchmark datasets demonstrated that our method achieved better results over state-of-the-art methods with a 2-3× speedup in inference. Code is available at https://github.com/chenhang98/AGQ. |
| Open Datasets | Yes | We conducted experiments on benchmark datasets AC3/AC4 (Kasthuri et al., 2015) and ZEBRAFINCH (Kornfeld et al., 2017). [...] To validate the effectiveness of our method, we conducted experiments on the benchmark datasets AC3/AC4 (Kasthuri et al., 2015) and ZEBRAFINCH (Kornfeld et al., 2017). |
| Dataset Splits | Yes | For AC3/AC4 dataset, following previous work (Huang et al., 2022b; Luo et al., 2024; Arganda-Carreras et al., 2015), we utilized the top 80 sections of AC4 as the training set, the subsequent 20 sections as the validation set, and the top 100 sections of AC3 as the test set for the benchmark. [...] The ZEBRAFINCH dataset contains 33 volumes (approximately 150×150×150), of which we used 30 volumes as a training set and 3 volumes as a test set. |
| Hardware Specification | Yes | Training was performed on 8 NVIDIA 3090 GPUs and cost about 40 hours. [...] Inference times were tested with an NVIDIA 3090 GPU and 64 Intel Xeon Gold CPUs, which represent the time (in seconds) to process the full test set. |
| Software Dependencies | No | Our code was built on the pytorch connectomics (Lin et al., 2021) codebase. The paper mentions this specific codebase but does not provide version numbers for core software components like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | By default, we employed a 3D neuron decoder with two stages (i.e., K = 2 in Figure 4), comprising 100 learnable queries and a maximum of 100 affinity-guided queries. For feature extraction, we adopted a 3D U-Net with ResBlock (He et al., 2016) (i.e., ResUNet (Xiao et al., 2018)). We used the Adam optimizer and trained for 20k iterations by default. [...] The learning rate followed the cosine schedule with a base learning rate of 0.0001. The total batch size is 8 (i.e., one volume image block per GPU). [...] the loss weights are specified as λDICE = 3 and λCE = 0.3. [...] λfeature = 0.1 represents the loss weight, and τ = 0.3 denotes the temperature. [...] the loss weight λaffinity = 1. Additionally, we adopt the label smoothing technique (Szegedy et al., 2016) with ϵ = 10⁻⁵. |
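The dataset-split protocol quoted above (top 80 sections of AC4 for training, the next 20 for validation, top 100 sections of AC3 for testing) can be sketched as section-axis slicing. This is a minimal illustration, not the authors' code; `ac4` and `ac3` are hypothetical stand-ins for the loaded EM volumes as `(sections, height, width)` arrays.

```python
import numpy as np

def split_ac3_ac4(ac4: np.ndarray, ac3: np.ndarray):
    """Split the AC3/AC4 volumes along the section (z) axis per the quoted protocol."""
    train = ac4[:80]      # top 80 sections of AC4  -> training set
    val = ac4[80:100]     # subsequent 20 sections  -> validation set
    test = ac3[:100]      # top 100 sections of AC3 -> test set
    return train, val, test

# Example with dummy volumes of 100 sections each (shapes are illustrative).
ac4 = np.zeros((100, 1024, 1024), dtype=np.uint8)
ac3 = np.zeros((100, 1024, 1024), dtype=np.uint8)
train, val, test = split_ac3_ac4(ac4, ac3)
print(train.shape[0], val.shape[0], test.shape[0])  # 80 20 100
```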