SCCS: Deep Neural Spectral Clustering for Self-Supervised Subcellular Structure Segmentation

Authors: Jimao Jiang, Diya Sun, Tianbing Wang, Yuru Pei

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. The proposed approach is evaluated on a publicly available volumetric electron microscopy dataset. Experiments demonstrate the effectiveness and performance gains of the proposed SCCS over the state of the art in discovering a variety of subcellular structures.
Researcher Affiliation: Academia. 1 School of Intelligence Science and Technology, Key Laboratory of Machine Perception (MOE), State Key Laboratory of General Artificial Intelligence, Peking University, Beijing 100871, China; 2 Institute of Artificial Intelligence, Peking University People's Hospital, Peking University, Beijing 100871, China.
Pseudocode: No. The paper describes its methodology through detailed textual explanations and diagrams, such as Figure 1 ("Pipeline overview of our SCCS"), but it does not contain a block or section explicitly labeled 'Pseudocode' or 'Algorithm' with structured steps.
Open Source Code: No. The paper does not state that the source code for the described methodology is released, nor does it provide a link to a code repository; it mentions using a dataset from Open Organelle but not releasing its own code.
Open Datasets: Yes. We have evaluated the proposed SCCS on dominant subcellular structure segmentation from primary mouse pancreatic islets β cells of the BetaSeg dataset (Heinrich et al. 2021; Müller et al. 2021).
Dataset Splits: Yes. We use the first three cell volumes for training and the remaining volume for testing.
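The reported split (first three cell volumes for training, the remaining one for testing) can be sketched as follows; the volume identifiers are hypothetical, since the report does not name them:

```python
# Hypothetical volume identifiers; the paper only states that the first
# three cell volumes are used for training and the remaining one for testing.
volumes = ["cell_1", "cell_2", "cell_3", "cell_4"]

train_volumes = volumes[:3]  # first three cell volumes
test_volumes = volumes[3:]   # remaining volume

print(train_volumes)  # ['cell_1', 'cell_2', 'cell_3']
print(test_volumes)   # ['cell_4']
```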
Hardware Specification: Yes. The training is performed on a PC with an NVIDIA RTX 2080Ti GPU, consuming 6 hours with 6,000 iterations.
Software Dependencies: No. The paper mentions the Adam optimizer and an MAE-based feature extractor but does not specify version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup: Yes. We use the Adam optimizer with momentum terms of 0.9 and 0.999. For training the MAE-based feature extractor, we use a learning rate of 1e-5 and a batch size of 32. We set the learning rate to 0.01 with a batch size of 1 for training the neural spectral clustering model. The channel number q of the MAE-based features is set to 192. κ in the RBF kernel-based affinity computation is set to 2. We retain u = 12 approximated spectral bases. We set the cluster number k to 8. Hyperparameter α in the affinity matrix computation is set to 4. The hyperparameter µ in the spectral embedding loss Lspe is set to 1. The scalar threshold η and coefficient ν in Lsmo are both set to 0.1. The hyperparameters γ1 and γ2 in the loss function L are set to 1 and 0.1, respectively, to balance the criteria of spectral embedding and regularized clustering. The LGC-based spectral embedding module consists of three linear graph convolutional layers with 384×96, 96×48, and 48×12 weight matrices. The MLP-based clustering module consists of two fully connected layers with 12×12 and 12×8 weight matrices.
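The setup above can be collected into a configuration sketch; all names below are illustrative (the paper releases no code), and the final check simply verifies that the reported weight-matrix sizes chain together and end at the stated cluster number:

```python
# Hypothetical configuration summarizing the reported hyperparameters;
# key names are illustrative, since no official code is released.
config = {
    "optimizer": "Adam",
    "betas": (0.9, 0.999),     # Adam momentum terms
    "lr_mae": 1e-5,            # MAE-based feature extractor
    "batch_size_mae": 32,
    "lr_spectral": 0.01,       # neural spectral clustering model
    "batch_size_spectral": 1,
    "q": 192,                  # MAE-based feature channels
    "kappa": 2,                # RBF kernel parameter in affinity computation
    "u": 12,                   # retained approximated spectral bases
    "k": 8,                    # cluster number
    "alpha": 4,                # affinity matrix hyperparameter
    "mu": 1,                   # spectral embedding loss L_spe
    "eta": 0.1,                # scalar threshold in L_smo
    "nu": 0.1,                 # coefficient in L_smo
    "gamma1": 1,               # weight of the spectral embedding criterion
    "gamma2": 0.1,             # weight of the regularized clustering criterion
}

# Reported layer shapes: three linear graph convolutional (LGC) layers
# followed by a two-layer MLP clustering head.
lgc_layers = [(384, 96), (96, 48), (48, 12)]
mlp_layers = [(12, 12), (12, 8)]

def shapes_chain(layers):
    """Check that each layer's output dim matches the next layer's input dim."""
    return all(d_out == d_in for (_, d_out), (d_in, _) in zip(layers, layers[1:]))

assert shapes_chain(lgc_layers + mlp_layers)
assert mlp_layers[-1][1] == config["k"]  # final output dim equals cluster number
```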