Efficient ANN-SNN Conversion with Error Compensation Learning
Authors: Chang Liu, Jiangrong Shen, Xuming Ran, Mingkun Xu, Qi Xu, Yi Xu, Gang Pan
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on CIFAR-10, CIFAR-100, ImageNet datasets show that our method achieves high-precision and ultra-low latency among existing conversion methods. |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology, Dalian University of Technology, Dalian, China 2Faculty of Electronic and Information Engineering, Xi'an Jiaotong University 3State Key Lab of Brain-Machine Intelligence, Zhejiang University 4National University of Singapore 5Guangdong Institute of Intelligence Science and Technology, Zhuhai, China 6College of Computer Science and Technology, Zhejiang University. Correspondence to: Qi Xu <EMAIL>. |
| Pseudocode | No | The paper describes methods and equations but does not contain explicitly structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | Experimental results on CIFAR-10, CIFAR-100, ImageNet datasets show that our method achieves high-precision and ultra-low latency among existing conversion methods. |
| Dataset Splits | No | The paper mentions using CIFAR-10, CIFAR-100, and ImageNet datasets but does not explicitly provide details about training/test/validation splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory amounts) used for conducting the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | In our dual threshold neuron method, the quantization steps L is a hyperparameter that affects the accuracy of the converted SNN. To better understand the impact of L on SNN performance and determine the optimal value, we trained VGG16, ResNet-20, and ResNet-18 networks with a pruning function with a learnable threshold λ using different quantization steps L, including 2, 4, 8, 16, and 32, and then converted them to SNNs. ... this paper sets the negative threshold to a small negative value (-1e-3 according to experience). |
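The setup quoted above can be sketched in code. The paper's exact pruning function and neuron update are not reproduced in this report, so the sketch below assumes a QCFS-style clip-floor quantized activation (a common choice for ANN-SNN conversion) parameterized by the quantization step L and learnable threshold λ, plus an illustrative dual-threshold integrate-and-fire step using the stated negative threshold of -1e-3; all function names and the reset-by-subtraction rule are assumptions, not the authors' implementation.

```python
import numpy as np

def quant_clip_floor(x, lam=1.0, L=8):
    """Clip-floor quantized activation for training the source ANN (sketch).

    Maps inputs onto L+1 discrete levels in [0, lam], where lam acts as the
    learnable threshold and L is the quantization step count.
    """
    return lam / L * np.clip(np.floor(x * L / lam + 0.5), 0, L)

def dual_threshold_step(v, pos_thresh=1.0, neg_thresh=-1e-3):
    """One update of an illustrative dual-threshold IF neuron.

    Emits a +1 spike when the membrane potential v reaches the positive
    threshold and a -1 spike when it drops below the small negative
    threshold (compensating over-firing); v is reset by subtraction.
    Returns (spike, new_potential).
    """
    if v >= pos_thresh:
        return 1, v - pos_thresh
    if v <= neg_thresh:
        return -1, v + pos_thresh
    return 0, v
```

For example, with λ = 1 and L = 4 an input of 0.3 quantizes to 0.25, and a membrane potential of 1.2 fires a positive spike and resets to 0.2 by subtraction.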