Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Networks

Authors: Ziqing Wang, Yuetong Fang, Jiahang Cao, Hongwei Ren, Renjing Xu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments across 2D, 3D, and event-driven classification tasks, as well as object detection and segmentation tasks, demonstrate the effectiveness of our method in various domains."
Researcher Affiliation | Academia | The Hong Kong University of Science and Technology (Guangzhou), China; Northwestern University, USA.
Pseudocode | No | The paper states "The detailed methodology and pseudo-code are provided in the Appendix," but no pseudocode or algorithm blocks appear in the main body of the paper.
Open Source Code | Yes | Code: https://github.com/bic-L/burst-ann2snn
Open Datasets | Yes | "Extensive experiments across 2D, 3D, and event-driven classification tasks, as well as object detection and segmentation tasks, demonstrate the effectiveness of our method in various domains. ...on CIFAR-10, CIFAR-100, and ImageNet datasets, respectively. We evaluated the benefits of Adafire Neuron using the ImageNet dataset (Deng et al. 2009). Tab. 3 presents our evaluation across various neuromorphic datasets, such as CIFAR10-DVS and N-Caltech101... We evaluated our method on the PASCAL VOC 2012 and MS COCO 2017 datasets..."
Dataset Splits | Yes | The paper uses standard benchmark datasets such as CIFAR-10, CIFAR-100, ImageNet, PASCAL VOC 2012, and MS COCO 2017, which have well-defined and commonly used training, validation, and test splits.
Hardware Specification | No | The paper discusses neuromorphic hardware in general terms (e.g., Intel's Loihi 2 and Synsense's Speck) as part of the SNN context, but it does not specify any particular GPU models, CPU models, or other hardware used to run its own experiments.
Software Dependencies | No | The paper does not list specific software dependencies or version numbers for any libraries, frameworks, or programming languages used in the experiments.
Experiment Setup | No | The paper defines parameters such as the max burst-firing pattern (φ), threshold ratio (ρ), base boundary (α_base), scaling factor (β), and decay constant (δ), and mentions tuning S_target. However, it does not provide concrete hyperparameter values or detailed training configurations (e.g., learning rates, batch sizes, optimizers) in the main text. Tables 1 and 2 report results at different timesteps (T), but T is varied across the results rather than fixed as a single setup parameter.
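To make the role of the max burst-firing pattern (φ) concrete: in ANN-to-SNN conversion, burst firing lets a neuron emit several spikes in one timestep. The sketch below is a minimal, hypothetical integrate-and-fire neuron with burst firing, capped at φ spikes per step; the function name, "reset by subtraction" rule, and signature are illustrative assumptions, not the paper's actual Adafire Neuron implementation.

```python
def burst_if_neuron(currents, threshold=1.0, phi=4):
    """Illustrative integrate-and-fire neuron with burst firing.

    Each timestep integrates the input current into the membrane
    potential v. When v crosses the threshold, up to `phi` spikes are
    emitted at once (burst firing), and v is reduced by one threshold
    per emitted spike ("reset by subtraction").
    """
    v = 0.0
    spikes = []
    for i_t in currents:
        v += i_t
        # Number of spikes this step, capped by the burst limit phi.
        n = min(int(v // threshold), phi) if v >= threshold else 0
        spikes.append(n)
        v -= n * threshold  # subtract one threshold per spike
    return spikes

# With phi=4, a large input is conveyed in a single burst;
# with phi=1 (standard IF neuron), it takes several timesteps.
print(burst_if_neuron([2.5, 0.0, 1.0], threshold=1.0, phi=4))
print(burst_if_neuron([2.5, 0.0, 1.0], threshold=1.0, phi=1))
```

The phi=1 case reduces to a plain integrate-and-fire neuron, which is one intuition for why burst firing can cut the number of timesteps needed to match an ANN activation.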