Differential Coding for Training-Free ANN-to-SNN Conversion
Authors: Zihan Huang, Wei Fang, Tong Bu, Peng Xue, Zecheng Hao, Wenxuan Liu, Yuanhong Tang, Zhaofei Yu, Tiejun Huang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various Convolutional Neural Networks (CNNs) and Transformers demonstrate that the proposed differential coding significantly improves accuracy while reducing energy consumption, particularly when combined with the threshold iteration method, achieving state-of-the-art performance. In this section, we first evaluate the performance of our proposed method on the ImageNet dataset across different models, comparing our results with state-of-the-art ANN-to-SNN conversion methods. |
| Researcher Affiliation | Academia | 1School of Computer Science, Peking University, Beijing, China 2School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, China 3Peng Cheng Laboratory, Shenzhen, China 4Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China 5Institute for Artificial Intelligence, Peking University, Beijing, China. |
| Pseudocode | Yes | Algorithm 1 Threshold iteration method to find the best threshold; Algorithm 2 Differential Coding with Graded Units and Spiking Neurons (DCGS) Conversion Method; Algorithm 3 Algorithm of MT Neuron on GPU |
| Open Source Code | Yes | The source codes of the proposed method are available at https://github.com/h-z-h-cell/ANN-to-SNN-DCGS. |
| Open Datasets | Yes | We conducted conversion experiments on 11 different CNNs and Transformers using the ImageNet dataset. We evaluated the performance of our approach for the object detection task on the COCO dataset using three different models provided by torchvision in various parameter settings. Additionally, we evaluated our method for the semantic segmentation task on the Pascal VOC dataset using two different models provided by torchvision. |
| Dataset Splits | No | The paper mentions using the ImageNet, COCO, and Pascal VOC datasets but does not explicitly provide specific training/test/validation dataset split information (e.g., exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology). |
| Hardware Specification | No | The paper states 'For simplicity, we use implementation 1 on GPUs' and discusses an 'Algorithm of MT Neuron on GPU', but does not provide specific GPU models or other hardware details used for running the experiments. |
| Software Dependencies | No | The paper mentions 'torch.float32 data type' and 'torch.int32' which suggests the use of PyTorch, but no specific version numbers for any software dependencies are provided. |
| Experiment Setup | Yes | Tables 1 and 3 list configurations including 'Time-step T' (e.g., 2, 4, 8, 16, 32, 64), 'n' (number of positive and negative thresholds, e.g., 1, 4, 8), and 'Threshold scale c' (e.g., 1, 4), which are specific settings for the converted SNN models. |
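To make the table's terminology concrete, the sketch below illustrates the differential-coding idea described in the paper in miniature: each time-step transmits only the *change* in activation, quantized to a graded spike level k limited to [-n, n] (matching the 'n' parameter in the experiment setup), with the unsent residual carried forward like a membrane potential. All function names and details here are hypothetical illustrations, not code from the paper's ANN-to-SNN-DCGS repository.

```python
def differential_encode(values, theta=0.25, n=4):
    """Encode activations as graded 'difference spikes'.

    Hypothetical sketch: each step transmits the change since the
    previous step, quantized to k * theta with k clipped to [-n, n];
    the unsent residual is carried forward, mimicking a membrane
    potential with soft reset.
    """
    sent = 0.0      # what the receiver has accumulated so far
    spikes = []
    for v in values:
        residual = v - sent                           # change still to transmit
        k = max(-n, min(n, round(residual / theta)))  # graded spike level
        spikes.append(k)
        sent += k * theta                             # soft-reset bookkeeping
    return spikes


def differential_decode(spikes, theta=0.25):
    """Recover an approximation of the activations by accumulating spikes."""
    total, out = 0.0, []
    for k in spikes:
        total += k * theta
        out.append(total)
    return out


# When an activation holds steady, no spikes need to be sent at all:
acts = [0.5, 0.5, 1.0]
print(differential_encode(acts))                       # e.g. [2, 0, 2]
print(differential_decode(differential_encode(acts)))  # recovers [0.5, 0.5, 1.0]
```

The zero spike at the steady step hints at why difference-based transmission can reduce energy consumption: unchanged activations cost nothing, whereas rate-style coding would keep firing to maintain the same value.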