Mitigating Catastrophic Forgetting in Spiking Neural Networks through Threshold Modulation

Authors: Ilyass Hammouamri, Timothée Masquelier, Dennis George Wilson

TMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on different datasets show that the neuromodulated SNN can mitigate forgetting significantly with respect to a fixed-threshold SNN. We also show that the evolved Neuromodulatory Network can generalize to multiple new scenarios and analyze its behavior.
Researcher Affiliation | Academia | Ilyass Hammouamri (EMAIL), CerCo, CNRS UMR 5549, Université Toulouse III, Toulouse, France; Timothée Masquelier (EMAIL), CerCo, CNRS UMR 5549, Université Toulouse III, Toulouse, France; Dennis Wilson (EMAIL), ISAE-Supaero, Université de Toulouse, Toulouse, France
Pseudocode | Yes | Algorithm 1: Neuromodulated training step
Open Source Code | Yes | Code available at https://github.com/Thvnvtos/Nm-SNN
Open Datasets | Yes | We use two different types of datasets: a neuromorphic dataset, DVS128 Gesture (Amir et al., 2017), composed of hand gestures captured with an event-based camera, and static image datasets: EMNIST (Extended-MNIST) letters (Cohen et al., 2017), composed of images of handwritten uppercase and lowercase letters, and MNIST (Deng, 2012).
Dataset Splits | Yes | Moreover, each dataset D is divided into 80% training instances and 20% testing instances. [...] For DVS128 Gesture, the evolution configuration is as follows: we use the classes from D_evo with n = 3 sequential tasks; each class is learned through 20 SGD updates, where we have k = 40 instances of each class and a batch size of 2; each instance consists of T = 16 frames [...] For the third test, due to having only 11 different classes, we mix D_evo and D_test [...] to obtain n = 6 sequential tasks; we note this dataset as D_6.
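The 80%/20% split quoted above can be sketched as a small helper. This is a hypothetical illustration, not the paper's code (which is in the linked repository); the function name and the assumption that a dataset is a flat list of instances are ours.

```python
import random

def split_dataset(instances, train_frac=0.8, seed=0):
    """Shuffle and split a dataset into train/test sets.

    Hypothetical helper mirroring the 80% training / 20% testing
    protocol described in the paper; not the authors' implementation.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = instances[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Example: 100 instances -> 80 train, 20 test
train, test = split_dataset(list(range(100)))
```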
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU model, CPU type, memory).
Software Dependencies | No | We used SpikingJelly (Fang et al., 2020), which is a PyTorch-based open-source deep learning framework for SNNs. The framework is mentioned, but no specific version numbers are given for SpikingJelly or PyTorch.
Experiment Setup | Yes | For DVS128 Gesture, the evolution configuration is as follows: we use the classes from D_evo with n = 3 sequential tasks; each class is learned through 20 SGD updates, where we have k = 40 instances of each class and a batch size of 2; each instance consists of T = 16 frames [...] The evolution of the NmN lasted approximately 600 generations until convergence. [...] EMNIST letters [...] Each class is learned through 20 SGD updates with a batch size of 4 and 80 instances. [...] We ran the evolution for approximately 1000 generations using the next 5 letters.
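The DVS128 Gesture configuration quoted above fixes several hyperparameters (n = 3 tasks, 20 SGD updates per class, k = 40 instances, batch size 2, T = 16 frames). A minimal sketch of how those numbers shape the sequential-task loop follows; the dict keys, function names, and loop structure are assumptions for illustration, and `train_step` stands in for one update of the paper's Algorithm 1 (the neuromodulated training step), whose internals are not reproduced here.

```python
# Hypothetical configuration mirroring the DVS128 Gesture setup quoted above.
DVS_GESTURE_CFG = {
    "n_tasks": 3,        # n sequential tasks drawn from D_evo
    "sgd_updates": 20,   # SGD updates per class
    "k_instances": 40,   # k instances of each class
    "batch_size": 2,
    "frames_T": 16,      # T frames per instance
}

def sequential_training(tasks, cfg, train_step):
    """Present tasks one after another, as in class-incremental learning.

    `train_step(task, batch_size)` performs one SGD update; in the paper
    this would be Algorithm 1's neuromodulated training step.
    """
    for task in tasks[: cfg["n_tasks"]]:
        for _ in range(cfg["sgd_updates"]):
            train_step(task, cfg["batch_size"])

# Example: count the total number of SGD updates across all tasks.
calls = []
sequential_training(["task0", "task1", "task2"], DVS_GESTURE_CFG,
                    lambda task, bs: calls.append((task, bs)))
```

With this configuration, each of the 3 tasks receives 20 updates, i.e. 60 updates in total, which is the quantity that determines how much earlier tasks can be forgotten before evaluation.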