HoSNNs: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
Authors: Hejia Geng, Peng Li
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | When trained with weak FGSM attacks (ϵ = 2/255), our HoSNNs significantly outperform conventionally trained LIF-based SNNs across multiple datasets. Furthermore, under significantly stronger PGD7 attacks (ϵ = 8/255), HoSNN achieves notable improvements in accuracy, increasing from 30.90% to 74.91% on Fashion-MNIST, 0.44% to 36.82% on SVHN, 0.54% to 43.33% on CIFAR10, and 0.04% to 16.66% on CIFAR100. |
| Researcher Affiliation | Academia | Hejia Geng, Department of Electrical and Computer Engineering, University of California, Santa Barbara; Peng Li, Department of Electrical and Computer Engineering, University of California, Santa Barbara |
| Pseudocode | No | The paper describes the methodology using mathematical equations and textual explanations, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | Our code is adapted from Zhang & Li (2020). (This statement indicates their code is based on existing work but does not confirm that their specific implementation for this paper is open-source or provide access details.) |
| Open Datasets | Yes | The proposed HoSNNs are compared with LIF-based SNNs with identical architecture across four benchmark datasets: Fashion-MNIST (FMNIST) (Xiao et al., 2017), Street View House Numbers (SVHN) (Netzer et al., 2011), CIFAR10, and CIFAR100 (Krizhevsky, 2009). |
| Dataset Splits | No | The paper lists benchmark datasets like Fashion-MNIST, SVHN, CIFAR10, and CIFAR100 but does not explicitly provide the training/test/validation split percentages, sample counts, or specific methodology used for splitting these datasets in the main text or appendix. |
| Hardware Specification | Yes | The experiment used four NVIDIA A100 GPUs. |
| Software Dependencies | No | The paper mentions using the BPTT learning algorithm, Adam optimizer, and a sigmoid surrogate gradient, but it does not specify version numbers for key software components like Python, PyTorch/TensorFlow, or CUDA. |
| Experiment Setup | Yes | Hyperparameters for LIF and TA-LIF neurons included a simulation time T = 5, a membrane voltage constant τm = 5, and a synapse constant τs = 3. For the TA-LIF results in the main text, we assigned θi initialization values of 5 for Fashion-MNIST, SVHN, and CIFAR10, and 3 for CIFAR100. All neurons began with an initial threshold of 1. The step function was approximated using σ(x) = 1/(1 + e^(-5x)), where x = u(t) - Vth(t), and the BPTT learning algorithm was employed. For TA-ALIF neurons, the learning rate for θi was set at 1/10 of the rate designated for weights, ensuring hyperparameter stability during training. We also constrained θi to remain non-negative during optimization, ensuring a possible transition from TA-LIF to LIF. We utilized the Adam optimizer with betas set to (0.9, 0.999) and lr = 5e-4 with a cosine annealing learning rate scheduler (T = epochs). We set the batch size to 64 and trained for 200 epochs. For adversarial attacks, we used an array of attack strategies, including FGSM, RFGSM, PGD, and BIM. For both CIFAR10 and CIFAR100, we allocated an attack budget of ϵ = 8/255. For iterative schemes like PGD, we set α = 2.5ϵ/steps and steps = 7, 20, 40, aligning with the recommendations in Ding et al. (2022). For the adversarial training phase, FGSM training was used with ϵ values of 2/255 for CIFAR10, as per Ding et al. (2022), and 4/255 for CIFAR100, following Kundu et al. (2021). |
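The closed-form pieces quoted in the Experiment Setup row can be sketched directly. The snippet below is a minimal illustration, not the authors' code: the function names (`surrogate_sigmoid`, `pgd_step_size`, `cosine_annealing_lr`) and the zero floor on the annealed learning rate are assumptions; only the formulas themselves (σ(x) = 1/(1 + e^(-5x)), α = 2.5ϵ/steps, cosine annealing over T = epochs) come from the paper's reported setup.

```python
import math

def surrogate_sigmoid(x: float, k: float = 5.0) -> float:
    """Sigmoid surrogate sigma(x) = 1 / (1 + e^(-5x)) used to smooth the
    spike step function for BPTT; x = u(t) - Vth(t)."""
    return 1.0 / (1.0 + math.exp(-k * x))

def surrogate_grad(x: float, k: float = 5.0) -> float:
    """Derivative of the surrogate, substituted for the step function's
    (zero-almost-everywhere) gradient during backprop."""
    s = surrogate_sigmoid(x, k)
    return k * s * (1.0 - s)

def pgd_step_size(eps: float, steps: int) -> float:
    """Per-iteration PGD step size alpha = 2.5 * eps / steps
    (schedule recommended by Ding et al., 2022)."""
    return 2.5 * eps / steps

def cosine_annealing_lr(epoch: int, base_lr: float = 5e-4,
                        total_epochs: int = 200) -> float:
    """Cosine-annealed learning rate with T = total epochs; minimum lr
    assumed to be 0 (not stated in the excerpt)."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

# At the threshold crossing (u(t) = Vth(t)) the surrogate is 0.5 and
# its gradient peaks at k/4 = 1.25.
print(surrogate_sigmoid(0.0), surrogate_grad(0.0))
# PGD7 with the CIFAR budget eps = 8/255 gives alpha = 2.5*(8/255)/7.
print(pgd_step_size(8 / 255, 7))
# lr starts at 5e-4, halves at epoch 100, and anneals toward 0 at epoch 200.
print(cosine_annealing_lr(0), cosine_annealing_lr(100), cosine_annealing_lr(200))
```

The non-negativity constraint on θi and the 1/10 learning-rate ratio for θi would sit in the training loop itself, which is not reproduced in this excerpt.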