A Fast and Accurate ANN-SNN Conversion Algorithm with Negative Spikes
Authors: Xu Wang, Dongchen Zhu, Jiamao Li
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiment results verify the effectiveness of the proposed algorithm." (Abstract). Section 5, Experiment Results, presents Table 1 (model accuracy on the Fashion-MNIST dataset), Table 2 (model accuracy of VGG-16 on the CIFAR-10 dataset), Table 3 (model accuracy of ResNet-18 on the CIFAR-10 dataset), Table 4 (firing rate of each layer in Net4), Table 5 (weighting method for the calibration of SNN models in the temporal domain), and Section 5.3 (Ablation Study). |
| Researcher Affiliation | Academia | Affiliations listed: 1Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences; 2University of Chinese Academy of Sciences. The email domain `@mail.sim.ac.cn` and these institutions clearly indicate academic affiliations. |
| Pseudocode | No | The paper describes methods using mathematical equations and textual explanations, such as equations (1) to (40) and descriptions of algorithms like "Threshold Optimization" and "Joint Calibration of Multiple Layers". However, it does not present any clearly labeled pseudocode blocks or algorithms with structured, step-by-step formatting in a code-like manner. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, a link to a code repository (e.g., GitHub), or a mention of code being provided in supplementary materials. |
| Open Datasets | Yes | Table 1 shows the model accuracy of the state-of-the-art algorithms that support negative spikes on the Fashion-MNIST dataset. Table 2 shows the model accuracy of VGG-16 on the CIFAR-10 dataset, where acc is the conversion loss in terms of model accuracy. Table 3 shows the model accuracy of ResNet-18 on the CIFAR-10 dataset. |
| Dataset Splits | No | The paper mentions the use of "Fashion-MNIST dataset" and "CIFAR-10 dataset" but does not explicitly state the training/test/validation splits used, nor does it refer to specific predefined splits with citations within the main text. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions software components like "sigmoid function" and "Kullback-Leibler (KL) Divergence" in the context of their method. However, it does not provide specific version numbers for any libraries, frameworks, or operating systems used in their experimental setup. |
| Experiment Setup | Yes | Section 5.1, Implementation Details. Variance Penalty: when fine-tuning the model with the variance penalty (23), the total loss function is Λ = Λ_c + λ_1·η (Eq. 31), where Λ_c is the original loss function used to train the ANN model and λ_1 is the weighting factor. Since η may change dramatically from epoch to epoch during training, an adaptive weighting factor λ_1 = 0.1\|Λ_c\|/η is suggested; λ_1 is computed after detaching from the computation graph of Λ_c and η. Surrogate Function: ... In the experiment, κ is set to 6.7, increased to 10 after 10 epochs, and increased to 15 at the 20th epoch. |
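The adaptive loss weighting quoted under Experiment Setup (λ_1 = 0.1|Λ_c|/η, with λ_1 detached from the computation graph) can be sketched as follows. This is a minimal illustration using plain floats rather than autograd tensors, so no explicit detach is needed; the function name and the division-by-zero guard are assumptions for the sketch, not the authors' implementation.

```python
def total_loss(classification_loss: float, variance_penalty: float) -> float:
    """Sketch of Eq. (31): total loss = classification loss + lam1 * penalty,
    with the adaptive weighting factor lam1 = 0.1 * |loss| / penalty.

    In the paper lam1 is detached from the computation graph of both terms;
    with plain floats (an illustrative simplification) no detach is required.
    """
    eps = 1e-12  # hypothetical guard against division by zero
    lam1 = 0.1 * abs(classification_loss) / max(variance_penalty, eps)
    return classification_loss + lam1 * variance_penalty
```

With this choice of λ_1, the penalty term always contributes 0.1|Λ_c| to the total loss regardless of the scale of η, which is consistent with the paper's motivation that η may fluctuate dramatically between epochs.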