Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

Authors: Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

TMLR 2023

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
"In this paper, we aim to answer these questions by conducting a representation similarity analysis between SNNs and ANNs using Centered Kernel Alignment (CKA). We examine various aspects of representation similarity between SNNs and ANNs, including spatial and temporal dimensions, input data type, and network architecture. Our contributions and findings include: We analyze the representation similarity between SNNs and ANNs. As shown in Fig. 2, the CKA heatmap emerges as a checkerboard-like grid structure. Table 1: The top-1 accuracy of SNNs and ANNs on CIFAR-10 dataset."
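The similarity metric named above, linear CKA, can be sketched in a few lines. This is a minimal NumPy illustration under standard definitions, not the authors' released implementation (see their repository for that); the function name and the random test activations are ours:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X: (n_examples, d1) activations from one layer, Y: (n_examples, d2)
    from another. Returns a similarity in [0, 1]; 1 means identical
    representations up to rotation and isotropic scaling.
    """
    # Center each feature dimension over the examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X) ** 2
    denominator = np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y)
    return numerator / denominator

rng = np.random.default_rng(0)
acts = rng.standard_normal((100, 64))
print(linear_cka(acts, acts))        # self-similarity, ~1.0
print(linear_cka(acts, 2.0 * acts))  # invariant to isotropic scaling, ~1.0
```

Computing this score for every pair of layers (SNN layer i vs. ANN layer j) yields the kind of CKA heatmap the response describes.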
Researcher Affiliation: Academia
Yuhang Li (EMAIL), Yale University; Youngeun Kim (EMAIL), Yale University; Hyoungseob Park (EMAIL), Yale University; Priyadarshini Panda (EMAIL), Yale University
Pseudocode: No
The paper describes its methods and algorithms (e.g., CKA, BPTT) in narrative text and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks. Figure 1 illustrates a workflow but is not formatted as pseudocode.
Open Source Code: Yes
"Code is released at https://github.com/Intelligent-Computing-Lab-Yale/SNNCKA."
Open Datasets: Yes
"Our primary study case is ResNet with identity mapping block (He et al., 2016b) on the CIFAR10 dataset... We also provide RSA on VGG-series networks in Sec. A.1 and RSA on CIFAR100 dataset in Sec. A.2. We choose CIFAR10-DVS (Li et al., 2017), N-Caltech 101 (Orchard et al., 2015) and train spiking/artificial ResNet-20."
Dataset Splits: No
The paper mentions several standard datasets (CIFAR-10, CIFAR-100, CIFAR10-DVS, and N-Caltech 101), but it does not explicitly state train/test/validation split percentages or sample counts, nor does it cite a source for the exact splitting methodology. For example: "Our primary study case is ResNet with identity mapping block (He et al., 2016b) on the CIFAR10 dataset, which is the standard architecture and dataset in modern deep learning for image recognition." While these datasets have standard splits, the paper does not provide the required specifics.
Hardware Specification: No
The paper does not specify hardware details such as GPU models (e.g., NVIDIA A100), CPU models, or memory configurations used for the experiments. It states that "Detailed training setup and codes can be found in the supplementary material," so these details may appear in supplementary content that is not part of the provided text.
Software Dependencies: No
The paper does not list software dependencies with version numbers, such as a Python version or library versions (e.g., PyTorch 1.9, TensorFlow 2.x). It states that "Detailed training setup and codes can be found in the supplementary material," so these details may appear in supplementary content that is not part of the provided text.
Experiment Setup: Yes
"For default SNN training, we use direct encoding, τ = 0.5 for the leaky factor, vth = 1.0 for the firing threshold, T = 4 for the number of time steps, and α = 1.0 for surrogate gradient, which are tuned for the best training performance on SNNs. All models are trained with 300 epochs of stochastic gradient descent. The learning rate is set to 0.1 followed by a cosine annealing decay. The weight decay is set to 0.0001 for all models."
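The quoted hyperparameters can be made concrete with a small LIF-neuron sketch. Several details here are illustrative assumptions rather than claims about the paper: the leak convention (v ← τ·v + input), the hard reset to 0 after a spike, and the triangular surrogate shape; the paper's exact formulations live in its released code.

```python
import numpy as np

TAU = 0.5    # leaky factor (from the quoted setup)
V_TH = 1.0   # firing threshold
T = 4        # number of time steps
ALPHA = 1.0  # surrogate-gradient width

def lif_forward(currents):
    """Simulate one LIF neuron over T time steps.

    currents: shape (T,), input current at each step (direct encoding
    feeds the analog input repeatedly). Returns the binary spike train.
    """
    v = 0.0
    spikes = np.zeros(T)
    for t in range(T):
        v = TAU * v + currents[t]  # leaky integration (assumed convention)
        if v >= V_TH:              # fire and hard-reset (assumed reset rule)
            spikes[t] = 1.0
            v = 0.0
    return spikes

def surrogate_grad(v):
    """Triangular surrogate for d(spike)/d(membrane potential).

    The true Heaviside derivative is zero almost everywhere, so BPTT
    substitutes a smooth stand-in; the triangular shape used here is an
    illustrative choice, not taken from the paper.
    """
    return np.maximum(0.0, 1.0 - ALPHA * np.abs(v - V_TH))

print(lif_forward(np.array([0.6, 0.6, 0.6, 0.6])))  # -> [0. 0. 1. 0.]
```

With a constant 0.6 input, the membrane potential reaches the threshold only at the third step (0.6 → 0.9 → 1.05), which is why the spike train is sparse; the surrogate then lets gradients flow through that thresholding during training.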