Exploiting Label Skewness for Spiking Neural Networks in Federated Learning

Authors: Di Yu, Xin Du, Linshan Jiang, Huijing Zhang, Shuiguang Deng

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments with three different structured SNNs across five datasets (i.e., three non-neuromorphic and two neuromorphic datasets) demonstrate the efficiency of FedLEC.
Researcher Affiliation Academia Zhejiang University; National University of Singapore
Pseudocode Yes Algorithm 1: Aggregation Through Time in FedLEC. Input: Selected Client Set S, Total Timesteps T. Output: Updated SNN Parameters θ.
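The server-side aggregation in Algorithm 1 can be illustrated with a generic FedAvg-style sketch. This is an assumption-laden illustration only: the function name `fedavg_aggregate` is hypothetical, and FedLEC's actual "Aggregation Through Time" additionally folds in the SNN's T timesteps, whose details are given in the paper, not reproduced here.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Size-weighted averaging of client parameter dicts (FedAvg-style).

    A generic sketch only; FedLEC's Algorithm 1 ("Aggregation Through
    Time") extends this idea across the SNN's timesteps.
    """
    total = sum(client_sizes)
    keys = client_params[0].keys()
    return {
        k: sum(n * p[k] for n, p in zip(client_sizes, client_params)) / total
        for k in keys
    }
```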
Open Source Code Yes https://github.com/AmazingDD/FedLEC
Open Datasets Yes Extensive experiments with three different structured SNNs across five datasets (i.e., three non-neuromorphic and two neuromorphic datasets) demonstrate the efficiency of FedLEC. ... Specifically, in cifar10 dataset, FedLEC reaches an average improvement...When the task extends to more complex classification tasks such as cifar100...Additionally, FedLEC demonstrates potential advantages in handling distribution-based label skews... Results on Event-Based Datasets. We also apply FedLEC to event-based image classification tasks...
Dataset Splits No The paper describes how data is distributed across clients under label skew (e.g., 'p ~ Dir(0.05)' or '#cnum = 2') and the client participation rate ('selecting 20% parts from 10 clients'). However, it does not provide specific train/validation/test dataset splits for reproducibility.
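The distribution-based label skew mentioned above (p ~ Dir(0.05)) is commonly implemented by drawing, per class, a Dirichlet vector that decides what fraction of that class each client receives. A minimal sketch under that assumption (function name and seed are ours, not the paper's):

```python
import numpy as np

def dirichlet_label_skew(labels, n_clients=10, alpha=0.05, seed=0):
    """Partition sample indices across clients with Dirichlet label skew.

    For each class, a Dirichlet(alpha) draw decides what fraction of that
    class's samples each client receives; a small alpha (e.g. 0.05, as in
    the paper's setting) yields heavily skewed label distributions.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))  # class shares
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```

The quantity-based skew ('#cnum = 2') would instead restrict each client to samples from exactly two classes.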
Hardware Specification Yes All experimental trials are implemented on NVIDIA GeForce RTX 4090 GPUs.
Software Dependencies No The paper mentions using 'Adam optimizer' and 'S-VGG9' as the default backbone SNN model, but does not provide specific version numbers for any software libraries, frameworks, or programming languages (e.g., PyTorch, Python, CUDA versions).
Experiment Setup Yes We uniformly execute 50 communication rounds, selecting 20% parts from 10 clients to train their models for 10 local epochs per round with a batch size of 64 as default. We use the Adam optimizer for all trials with a learning rate 0.001.
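The stated setup (50 rounds, 20% of 10 clients per round, 10 local epochs, batch size 64, Adam with learning rate 0.001) can be captured as a small configuration-plus-sampling sketch; the constant names and `select_clients` helper are ours, not from the paper's code.

```python
import random

ROUNDS = 50          # communication rounds
N_CLIENTS = 10       # total clients
FRAC = 0.2           # participation rate -> 2 clients per round
LOCAL_EPOCHS = 10    # local epochs per round
BATCH_SIZE = 64
LR = 1e-3            # Adam learning rate

def select_clients(round_idx, seed=0):
    """Sample the participating client ids for one communication round."""
    rng = random.Random(seed + round_idx)
    k = max(1, int(FRAC * N_CLIENTS))
    return rng.sample(range(N_CLIENTS), k)
```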