GRSN: Gated Recurrent Spiking Neurons for POMDPs and MARL
Authors: Lang Qin, Ziming Wang, Runhao Jiang, Rui Yan, Huajin Tang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In order to test the performance of the proposed gated recurrent spiking neurons (GRSN) and temporal alignment paradigm (TAP), we conducted experiments in partially observable (PO) and multi-agent environments. Our main contributions are summarized as follows: ... Experimental results show that GRSN can outperform original spiking neurons in benchmark environments and can achieve similar performance as RNN-RL with about 50% energy consumption. ... Section 5 Experiments |
| Researcher Affiliation | Academia | Lang Qin1,2, Ziming Wang1,2, Runhao Jiang1,2, Rui Yan3*, Huajin Tang1,2,4 1College of Computer Science and Technology, Zhejiang University 2The State Key Lab of Brain-Machine Intelligence, Zhejiang University 3College of Computer Science and Technology, Zhejiang University of Technology 4MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University (author email addresses redacted) |
| Pseudocode | Yes | The pseudocode of GRSN with the temporal alignment paradigm is shown in Algorithm 1. Algorithm 1: GRSN with temporal alignment paradigm |
| Open Source Code | Yes | Code: https://github.com/StillWolf/GRSN-SNN |
| Open Datasets | Yes | The Pendulum and CartPole tasks are classic control tasks for evaluating RL algorithms. ... StarCraft Multi-Agent Challenge (SMAC) is a benchmark environment for evaluating MARL algorithms (Samvelyan et al. 2019). |
| Dataset Splits | No | The paper describes experimental setups in simulation environments (Pendulum, CartPole, SMAC) where data is generated dynamically through interaction, rather than from a fixed dataset with predefined splits. It specifies the number of independent experiments (five) and testing runs (ten times for each final model) and training steps (10 million for SMAC), but does not provide traditional training/test/validation dataset split percentages or counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as CPU or GPU models, memory, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions specific algorithms and models used (e.g., SAC, TD3, QMIX, GRU, MLP) but does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the implementation (e.g., PyTorch, TensorFlow, Python, CUDA). |
| Experiment Setup | Yes | We conducted independent experiments on five different random seeds for each method and tested each final model ten times to eliminate the interference of randomness. ... For all experiments w/o TAP, SNNs use repeated input and rate coding, and the time step is set to T = 4. ... training 10 million steps per experiment. |
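The paper's core contribution is the gated recurrent spiking neuron, which replaces repeated rate coding with a GRU-style gated membrane update. The sketch below is an illustrative reconstruction, not the authors' implementation: the gating form, weight names (`W_x`, `W_h`, `W_g`), hard reset, and threshold value are all assumptions made for clarity.

```python
import numpy as np

def grsn_step(x, u, h, W_x, W_h, W_g, v_th=1.0):
    """One step of a hypothetical gated recurrent spiking neuron.

    x: input vector; u: membrane potential; h: previous spike output.
    A sigmoid gate g blends the retained membrane potential with the
    new input current, in the spirit of a GRU update (illustrative only).
    """
    g = 1.0 / (1.0 + np.exp(-(W_g @ x)))   # input-conditioned gate
    i_t = W_x @ x + W_h @ h                # feedforward + recurrent current
    u = g * u + (1.0 - g) * i_t            # gated membrane-potential update
    s = (u >= v_th).astype(x.dtype)        # Heaviside spike generation
    u = u * (1.0 - s)                      # hard reset where a spike fired
    return s, u
```

In training, the surrogate-gradient trick would replace the non-differentiable Heaviside step; the temporal alignment paradigm (TAP) described in the paper aligns SNN time steps with environment steps, so a single `grsn_step` call per observation replaces the T = 4 repeated-input loop used by the baselines without TAP.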