Pixel-wise Divide and Conquer for Federated Vessel Segmentation

Authors: Tian Chen, Wenke Huang, Zhihao Wang, Zekun Shi, He Li, Wenhui Dong, Mang Ye, Bo Du, Yongchao Xu

IJCAI 2025

Reproducibility Checklist (Variable, Result, LLM Response)
Research Type: Experimental
LLM Response: "Comprehensive experiments confirm the effectiveness of our method." (Sec. 4 Experiments, 4.1 Experimental Setup, Datasets and Evaluation Metric) "We evaluate FVAC on both color fundus and OCTA retinal datasets. The fundus data include DRIVE [Staal et al., 2004], STARE [Hoover et al., 2000], and CHASEDB1 [Owen et al., 2009], while the OCTA data consist of ROSE-1 [Ma et al., 2021] and OCT-500 [Li et al., 2024]. We evaluate all vessel datasets using the Dice coefficient as the sole performance metric..."
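Since the Dice coefficient is the sole performance metric, a minimal sketch of how it is typically computed for binary vessel masks may be useful (the function name and array conventions here are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))
```

A perfect prediction yields 1.0; disjoint masks yield 0.0.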
Researcher Affiliation: Academia
LLM Response: School of Computer Science, Wuhan University
Pseudocode: Yes
LLM Response:
Algorithm 1: Model training in FVAC
Input: communication rounds T; local epochs E; number of participants K; the k-th participant's private data D_k(x, y); private model θ_k
Output: the final global model θ^T

for t = 1, 2, ..., T do
    // Participant side
    for k = 1, 2, ..., K in parallel do
        θ_k^t ← LocalUpdating(θ^t, G)
    // Server side
    θ^{t+1} ← (1/K) Σ_{k=1}^{K} θ_k^t

LocalUpdating(θ^t, G):
    θ_k^t ← θ^t          // distribute global parameters
    θ_fixed ← θ^t        // fix global parameters
    for e = 1, 2, ..., E do
        for (X_i, Y_i) ∈ D_k do
            Z_i = f(X_i, θ_k^t),  Z_i^g = f(X_i, θ_fixed)
            P_i = σ(Z_i),  P_i^g = σ(Z_i^g)
            /* Uncertainty estimation */
            U_i ← (P_i, Y_i),  U_i^g ← (P_i^g, Y_i)                  // Eq. (6)
            /* Weight assignment */
            α ← (U_i^g, U_i)                                          // Eqs. (7) and (8)
            /* Feature decoupling */
            F_{fg,i}, F_{bg,i} ← (F_i, S_i)                           // Eq. (10)
            F_{fg,i}^g, F_{bg,i}^g ← (F_i^g, S_i)                     // Eq. (10)
            L_k^{FMUG} ← (α, P_i, Y_i)                                // Eq. (9)
            L_k^{FVDA} ← (F_{fg,i}, F_{bg,i}, F_{fg,i}^g, F_{bg,i}^g) // Eq. (11)
            L_k^{FVAC} ← (L_k^{FMUG}, L_k^{FVDA})                     // Eq. (12)
            θ_k^e ← θ_k^e − η ∇L_k
    return θ_k^e
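The outer loop of Algorithm 1 is a FedAvg-style round: each participant updates a copy of the global parameters locally, then the server averages the K local models. A minimal runnable sketch of that server-side structure, using plain NumPy dicts in place of real network weights (the function names and parameter representation are assumptions, not the authors' implementation):

```python
import copy
from typing import Callable, Dict, List
import numpy as np

Params = Dict[str, np.ndarray]  # parameter name -> weight array

def fedavg_round(global_params: Params,
                 local_update: Callable[[Params], Params],
                 num_clients: int) -> Params:
    """One communication round: K clients each refine a copy of the
    global model; the server returns the element-wise average."""
    local_models: List[Params] = [
        local_update(copy.deepcopy(global_params)) for _ in range(num_clients)
    ]
    # θ^{t+1} = (1/K) Σ_k θ_k^t, averaged per parameter tensor
    return {
        name: np.mean([m[name] for m in local_models], axis=0)
        for name in global_params
    }
```

In FVAC, `local_update` would additionally keep a frozen copy of the global parameters (θ_fixed) to drive the uncertainty-guided losses during local epochs.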
Open Source Code: No
LLM Response: No clear statement or link to open-source code is provided in the paper. The 'Limitation' section discusses future work, but not code release for the current work.
Open Datasets: Yes
LLM Response: "We evaluate FVAC on both color fundus and OCTA retinal datasets. The fundus data include DRIVE [Staal et al., 2004], STARE [Hoover et al., 2000], and CHASEDB1 [Owen et al., 2009], while the OCTA data consist of ROSE-1 [Ma et al., 2021] and OCT-500 [Li et al., 2024]."
Dataset Splits: Yes
LLM Response: "DRIVE [Staal et al., 2004] contains 40 high-resolution color retinal images (565 × 584), evenly split into 20 for training and 20 for testing. STARE [Hoover et al., 2000] contains 20 manually annotated color retinal images (700 × 605), with 16 used for training and 4 for testing. CHASEDB1 [Owen et al., 2009] contains 28 color retinal images with a resolution of 999 × 960 pixels, split into 20 images for training and 8 for testing. ROSE-1 [Ma et al., 2021], a subset of ROSE, includes 117 OCTA images from 39 subjects, with 90 for training and 27 for testing. OCT-500 [Li et al., 2024] provides OCTA images from 500 subjects across two fields of view, split into OCTA-6MM (300 images) and OCTA-3MM (200 images). We use 280/20 for training/testing in OCTA-6MM and 180/20 in OCTA-3MM."
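The train/test counts quoted above can be collected into a single configuration table, which makes the splits easy to sanity-check. This is a sketch for illustration only; the dict layout and helper name are not from the paper:

```python
# dataset name -> (train images, test images), as reported in the paper
SPLITS = {
    "DRIVE":    (20, 20),
    "STARE":    (16, 4),
    "CHASEDB1": (20, 8),
    "ROSE-1":   (90, 27),
    "OCTA-6MM": (280, 20),
    "OCTA-3MM": (180, 20),
}

def total_images(name: str) -> int:
    """Total image count for a dataset (train + test)."""
    train, test = SPLITS[name]
    return train + test
```

Note that `total_images("OCTA-6MM")` and `total_images("OCTA-3MM")` recover the 300/200 image totals stated for the two OCT-500 fields of view.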
Hardware Specification: No
LLM Response: The paper does not provide specific hardware details (such as GPU/CPU models or other computing infrastructure) used for running the experiments; it only mentions using U-Net as the backbone for the models.
Software Dependencies: No
LLM Response: The paper mentions using the AdamW optimizer, but does not provide specific version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow) that would be needed to replicate the experiments.
Experiment Setup: Yes
LLM Response: "For the experiment, to ensure simplicity and general applicability, we choose U-Net, a widely used and effective architecture, as the backbone for all Federated Learning methods. To ensure the reproducibility and consistency of our results, we fix the random seed across all experiments. In the federated learning process, models are trained using the AdamW optimizer [Loshchilov and Hutter, 2019] with a batch size of 4. The communication round is set to 100, and the local training epoch is 5 for all datasets. We set up six participants per experiment with different combined datasets and randomly assign domains. For color fundus data, the split is DRIVE:3, STARE:1, CHASEDB1:1; for OCTA, ROSE-SVC:2, ROSE-SD:1, OCTA-3MM:1, OCTA-6MM:2. Each participant receives 1% of the original data from their assigned domains. We also apply data augmentations on each client, including color jitter, gamma correction, flipping, rotation, and random cropping. We use a learning rate of 1e-4 for both collaborative and local updates on the color retinal datasets across all methods. In our method, all participants share the same hyperparameters (i.e., β = 2). For the OCTA datasets, the learning rate is reduced to 1e-5 due to higher segmentation difficulty from varying modalities and denser vessel structures."
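The hyperparameters quoted above can be gathered into one configuration object, which is a common way to keep federated experiments reproducible. This is a minimal sketch under the reported settings; the class and field names are illustrative, not the authors' code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FVACConfig:
    rounds: int = 100          # communication rounds T
    local_epochs: int = 5      # local training epochs E
    num_clients: int = 6       # participants per experiment
    batch_size: int = 4        # AdamW batch size
    beta: float = 2.0          # shared hyperparameter β
    lr_fundus: float = 1e-4    # color fundus datasets
    lr_octa: float = 1e-5      # OCTA datasets (harder segmentation)

    def learning_rate(self, modality: str) -> float:
        """Pick the learning rate reported for the given modality."""
        return self.lr_octa if modality == "octa" else self.lr_fundus
```

Freezing the dataclass (and, as the paper notes, fixing the random seed) helps ensure every run sees identical settings.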