BSO: Binary Spiking Online Optimization Algorithm

Authors: Yu Liang, Yu Yang, Wenjie Wei, Ammar Belatreche, Shuai Wang, Malu Zhang, Yang Yang

ICML 2025

Reproducibility Variable: Result — Evidence (LLM Response)
Research Type: Experimental — "Extensive experiments demonstrate that both BSO and T-BSO achieve superior optimization performance compared to existing training methods for BSNNs. The codes are available at https://github.com/hamingsi/BSO." From Section 5 (Experiments): "In this section, we first present experimental details, including the utilized datasets, architectures, and settings. Second, we compare our methods with existing online training and BSNN approaches to evaluate their effectiveness. Third, we conduct ablation studies to assess the efficiency improvements of BSO and T-BSO during the training process, as well as the performance advantages of T-BSO over BSO."
Researcher Affiliation: Academia — 1) University of Electronic Science and Technology of China; 2) Northumbria University. Correspondence to: Malu Zhang <EMAIL>.
Pseudocode: Yes — "Algorithm 1: BSO and T-BSO for optimization."
Open Source Code: Yes — "The codes are available at https://github.com/hamingsi/BSO."
Open Datasets: Yes — "We validate our BSO and T-BSO on image classification tasks, including static image datasets such as CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009), as well as the neuromorphic dataset DVS-CIFAR10 (Li et al., 2017). Our exploration on the GSC dataset demonstrates T-BSO's superior performance."
Dataset Splits: Yes — "CIFAR-10 is a dataset consisting of color images across 10 object categories, with 50,000 training samples and 10,000 testing samples. Each image is 32 × 32 pixels with three color channels. CIFAR-100 is similar to CIFAR-10 but contains 100 object categories instead of 10. It includes 50,000 training samples and 10,000 testing samples, with the same preprocessing applied as in CIFAR-10. ImageNet-1K is a large-scale dataset consisting of color images across 1,000 object categories, with 1,281,167 training samples and 50,000 validation images. DVS-CIFAR10: ...We split the dataset into 9,000 training samples and 1,000 testing samples."
Hardware Specification: No — no specific hardware details (GPU models, CPU types, etc.) are mentioned in the paper.
Software Dependencies: No — the paper does not provide specific software dependencies with version numbers.
Experiment Setup: Yes — "For the training configuration, we apply SGD with a cosine annealing learning rate schedule. The initial learning rate is set to 0.1 and decays to 0 throughout the training process. Moreover, we set the threshold Vth and the decay factor λ of the LIF neuron to 1 and 0.5, respectively. For the hyperparameters of our BSO and T-BSO, we set γ to 5 × 10⁻⁷, and β1 and β2 to 1 × 10⁻³ and 1 × 10⁻⁵. We provide the hyperparameter analysis for them in Sec. 5.4. When experimenting on ImageNet, we first pre-train the model using a single time step for 100 epochs, then fine-tune with T-BSO using 4–6 time steps for 30 epochs to accelerate training. Additional details are available in Appendix B (Table 6: training hyperparameters for BSO; Table 7: training hyperparameters for T-BSO)."
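The quoted setup pairs a cosine-annealed learning rate (0.1 decaying to 0) with LIF neurons using Vth = 1 and decay factor λ = 0.5. A minimal pure-Python sketch of those two pieces is below; the function names (`cosine_lr`, `lif_step`) and the soft-reset choice are our own assumptions for illustration, not the paper's implementation (which lives in the linked repo):

```python
import math

def cosine_lr(step, total_steps, lr_init=0.1, lr_min=0.0):
    """Cosine annealing from lr_init down to lr_min over total_steps,
    matching the quoted schedule (initial lr 0.1, decaying to 0)."""
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_init - lr_min) * cos

def lif_step(v, x, v_th=1.0, lam=0.5):
    """One LIF neuron update: decay the membrane potential, integrate the
    input, emit a spike when v crosses v_th. Defaults match the paper's
    Vth = 1 and λ = 0.5. Soft reset (subtracting v_th) is an assumption;
    the paper may use a different reset rule."""
    v = lam * v + x
    spike = 1.0 if v >= v_th else 0.0
    v = v - v_th * spike
    return v, spike

# Schedule endpoints: lr starts at 0.1 and anneals to 0.
assert cosine_lr(0, 100) == 0.1
assert abs(cosine_lr(100, 100)) < 1e-12

# Constant input 0.6: the potential builds up and crosses Vth on step 3.
v, spikes = 0.0, []
for _ in range(4):
    v, s = lif_step(v, 0.6)
    spikes.append(s)
# spikes is [0.0, 0.0, 1.0, 0.0]
```

The decay factor λ = 0.5 halves the retained potential each step, so with a sub-threshold constant input the neuron spikes sparsely rather than on every step, which is the behavior the binary/spiking setting relies on.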