FSTA-SNN: Frequency-Based Spatial-Temporal Attention Module for Spiking Neural Networks

Authors: Kairong Yu, Tianqing Zhang, Hongwei Wang, Qi Xu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. The experimental results indicate that the FSTA module significantly reduces the spike firing rate of SNNs while outperforming state-of-the-art baselines across multiple datasets. The authors state: "We perform comprehensive experiments to assess the proposed method and compare it with other recent SOTA methods on several widely used architectures."
Researcher Affiliation: Academia. (1) Zhejiang University-University of Illinois Urbana-Champaign Institute, Zhejiang University, Haining, China; (2) The College of Computer Science and Technology, Zhejiang University, Hangzhou, China; (3) School of Computer Science and Technology, Dalian University of Technology, Dalian, China. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode: No. The paper includes equations and diagrams illustrating the proposed FSTA module and its submodules (Figure 2), but it does not present any structured pseudocode or algorithm blocks.
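Since the paper itself provides no pseudocode, the following is only an illustrative sketch of what a frequency-based channel-attention gate can look like in general; the chosen DCT frequencies, the sigmoid gating, and the tensor shapes are assumptions, and the real FSTA module additionally models temporal structure across spike time steps, which is omitted here.

```python
import numpy as np

def dct_basis(h, w, u, v):
    """2D DCT-II basis function of frequency (u, v) on an h x w grid."""
    i = np.arange(h)[:, None]
    j = np.arange(w)[None, :]
    return (np.cos((2 * i + 1) * u * np.pi / (2 * h))
            * np.cos((2 * j + 1) * v * np.pi / (2 * w)))

def freq_channel_attention(x, freqs=((0, 0), (0, 1), (1, 0))):
    """Gate each channel of x (C, H, W) by a sigmoid of low-frequency DCT coefficients.

    Illustrative only -- not the authors' exact FSTA module.
    """
    c, h, w = x.shape
    # per-channel descriptor: one DCT coefficient per chosen frequency
    desc = np.stack([(x * dct_basis(h, w, u, v)).sum(axis=(1, 2))
                     for u, v in freqs], axis=1)   # shape (C, len(freqs))
    z = desc.mean(axis=1)                          # pooled descriptor, shape (C,)
    gate = 1.0 / (1.0 + np.exp(-z))                # sigmoid attention weights in (0, 1)
    return x * gate[:, None, None]                 # re-weighted feature map

# usage: gate an 8-channel 4x4 feature map
x = np.random.randn(8, 4, 4)
y = freq_channel_attention(x)
```

Because the gate lies in (0, 1), the module can only attenuate channels, never amplify them; this is one way such an attention mechanism can suppress redundant spike activity.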
Open Source Code: Yes. Code: https://github.com/yukairong/FSTA-SNN
Open Datasets: Yes. The experiments use the static datasets CIFAR-10 and CIFAR-100 (Krizhevsky, Nair, and Hinton 2010) and ImageNet (Deng et al. 2009), as well as the dynamic dataset CIFAR10-DVS (Li et al. 2017).
Dataset Splits: No. The paper uses established datasets (CIFAR-10, CIFAR-100, ImageNet, CIFAR10-DVS) that have standard splits, but it does not explicitly state the training/validation/test split percentages or the split methodology used in its experiments.
Hardware Specification: No. The paper reports energy costs (ACs, MACs, FLOPs, Energy (mJ)) in Table 5, but it does not specify the hardware used for the experiments, such as GPU models, CPU types, or memory.
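Energy figures of this kind are conventionally derived in the SNN literature from per-operation costs measured for 45 nm CMOS (roughly 0.9 pJ per accumulate, AC, and 4.6 pJ per multiply-accumulate, MAC, after Horowitz); whether this paper's Table 5 uses exactly these constants is an assumption. A minimal sketch of that conversion:

```python
def estimate_energy_mj(n_ac, n_mac, e_ac_pj=0.9, e_mac_pj=4.6):
    """Estimate inference energy in millijoules from operation counts.

    Uses the 45 nm CMOS per-op costs common in SNN papers
    (0.9 pJ per AC, 4.6 pJ per MAC); these constants are assumptions,
    not values taken from this paper's Table 5.
    """
    total_pj = n_ac * e_ac_pj + n_mac * e_mac_pj
    return total_pj * 1e-9  # pJ -> mJ (1 pJ = 1e-9 mJ)

# e.g. a purely spiking network with 1e9 ACs and no MACs costs about 0.9 mJ
energy = estimate_energy_mj(1e9, 0)
```

Because an AC is about five times cheaper than a MAC under these constants, lowering the spike firing rate (and hence the AC count) translates directly into the energy savings such tables report.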
Software Dependencies: No. The paper does not explicitly name any software dependencies or their version numbers (e.g., the Python version, or libraries such as PyTorch or TensorFlow with their versions).
Experiment Setup: No. The paper specifies ResNet20 as a baseline and discusses parameters such as the time step and kernel sizes (3x3, 5x5, 7x7), but it does not provide a full set of hyperparameters (learning rate, batch size, optimizer details, number of training epochs) needed to reproduce the experimental setup.