Synchrony-Gated Plasticity with Dopamine Modulation for Spiking Neural Networks
Authors: Yuchen Tian, Samuel Tensingh, Jason Eshraghian, Nhan Duy Truong, Omid Kavehei
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | evaluations on benchmarks like CIFAR-10 (+0.42%), CIFAR-100 (+0.99%), CIFAR10-DVS (+0.1%), and ImageNet-1K (+0.73%) demonstrated consistent accuracy gains |
| Researcher Affiliation | Collaboration | Yuchen Tian EMAIL School of Biomedical Engineering, The University of Sydney, Sydney, NSW, Australia; Nhan Duy Truong EMAIL School of Biomedical Engineering, The University of Sydney, Sydney, NSW, Australia |
| Pseudocode | Yes | Algorithm 1 DA-SSDP (post-step correction; used in our experiments) |
| Open Source Code | Yes | Our code is available at https://github.com/NeuroSyd/DA-SSDP. |
| Open Datasets | Yes | evaluations on benchmarks like CIFAR-10 (+0.42%), CIFAR-100 (+0.99%), CIFAR10-DVS (+0.1%), and ImageNet-1K (+0.73%) demonstrated consistent accuracy gains |
| Dataset Splits | Yes | Event-based CIFAR10-DVS is loaded from SpikingJelly in frame representation with T frames per sample and split 90%/10% for train/test. Backbones follow SpikingResformer with T=4 steps on CIFAR-10/100 (Krizhevsky et al., 2009) and ImageNet-1k (Deng et al., 2009) |
| Hardware Specification | Yes | The setup involved a workstation with dual NVIDIA RTX 3090 GPUs, employing PyTorch 1.12.1 with CUDA 11.3 and NumPy 1.24.4. |
| Software Dependencies | Yes | The setup involved a workstation with dual NVIDIA RTX 3090 GPUs, employing PyTorch 1.12.1 with CUDA 11.3 and NumPy 1.24.4. |
| Experiment Setup | Yes | DA-SSDP uses a warm-up of E_warm = 100 epochs (E_warm = 80 for CIFAR10-DVS) to fit the gate, and the fitted gate is then kept fixed for the remaining epochs. Kernel parameters are A+ = 1.5e-3, A- = 1.0e-4, and a learnable σ. CIFAR-100: model: spikingresformer_cifar; input: 3×32×32; epochs: 600; batch size: 200; T: 4; optimizer: AdamW with lr = 5e-4 and weight decay 0.01; augmentation: RandAugment rand-m7-n1-mstd0.5-inc1; mixup: on; cutout: off; label smoothing: 0.1; AMP: on; SyncBN: off. |
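The reported kernel constants can be made concrete with a short sketch. This is an illustrative, hypothetical update rule, not the authors' exact DA-SSDP algorithm (see their repository for that): it assumes a Gaussian synchrony kernel over the pre/post spike-time difference, a multiplicative gate in [0, 1], and the amplitudes A+ = 1.5e-3 and A- = 1.0e-4 quoted above; the function name and signature are invented for this example.

```python
import numpy as np

# Reported kernel amplitudes (from the experiment-setup row above).
A_PLUS = 1.5e-3   # potentiation amplitude
A_MINUS = 1.0e-4  # depression amplitude

def gated_ssdp_update(dt, sigma, gate):
    """Hypothetical synchrony-gated weight change.

    dt    : pre/post spike-time difference (same units as sigma)
    sigma : kernel width (learnable in the paper)
    gate  : dopamine-style modulation factor in [0, 1] (assumed form)
    """
    # Assumed Gaussian synchrony measure: 1 for coincident spikes,
    # decaying toward 0 as |dt| grows.
    sync = np.exp(-dt**2 / (2.0 * sigma**2))
    # Potentiate in proportion to synchrony, depress otherwise,
    # with the whole update scaled by the gate.
    return gate * (A_PLUS * sync - A_MINUS * (1.0 - sync))

# Coincident spikes with the gate fully open yield the full A+ step;
# a closed gate (gate = 0) suppresses plasticity entirely.
dw_sync = gated_ssdp_update(dt=0.0, sigma=0.01, gate=1.0)
dw_off = gated_ssdp_update(dt=0.0, sigma=0.01, gate=0.0)
```

Under these assumptions the gate acts purely as a scalar on the plasticity step, which matches the report's description of a gate that is fitted during warm-up and then frozen.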