Efficient Inference for Dynamic Flexible Interactions of Neural Populations

Authors: Feng Zhou, Quyu Kong, Zhijie Deng, Jichao Kan, Yixuan Zhang, Cheng Feng, Jun Zhu

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we first conduct experiments to compare Gibbs sampler, EM algorithm and mean-field approximation for SNMHP and dynamic-SNMHP on synthetic spike data; and further experiment on real neural recordings to demonstrate that the developed models achieve superior performance over the state-of-the-art competitors."
Researcher Affiliation | Collaboration | Feng Zhou (1,2), Quyu Kong (3,4), Zhijie Deng (1), Jichao Kan (4), Yixuan Zhang (4), Cheng Feng (2,5), Jun Zhu (1,2). Affiliations: 1. Dept. of Comp. Sci. & Tech., BNRist Center, THU-Bosch Joint ML Center, Tsinghua University; 2. THU-Siemens Joint Research Center for Industrial Intelligence and Internet of Things; 3. Research School of Computer Science, Australian National University; 4. Data Science Institute, University of Technology Sydney; 5. Siemens AG
Pseudocode | Yes | "The pseudocode is provided in Algorithm 1." "The pseudocode is provided in Algorithm 2." "The pseudocode is provided in Algorithm 3."
Open Source Code | Yes | "The implementation code is publicly available at https://github.com/zhoufeng6288/DFN-Hawkes."
Open Datasets | Yes | "In this section, we analyze the performance of SNMHP (single-state dynamic-SNMHP) and dynamic-SNMHP on a real multi-neuron evoked spike train dataset recorded in cat primary visual cortex under visual stimulation. The neural data (Blanche, 2009) was recorded by Tim Blanche in the laboratory of Nicholas Swindale, University of British Columbia, and downloaded from the Collaborative Research in Computational Neuroscience data sharing website. ... In this section, we use the proposed SNMHP and dynamic-SNMHP to analyze a more challenging real multi-neuron spike train dataset which contains 50 neurons. In the frontal cortex of male Long-Evans rats, the spike train data (Watson et al., 2016) was recorded by silicon probe electrodes."
Dataset Splits | Yes | "We extract the spikes in the time window [τ − 100, τ] (time unit: 1s) as the wake-state training data, [τ − 200, τ − 100] as the wake-state test data, [τ, τ + 100] as the sleep-state training data, and [τ + 100, τ + 200] as the sleep-state test data. We concatenate the training (test) sequences in the two states in chronological order to constitute a two-state training (test) dataset on [0, 200]. The training dataset contains 30510 spikes and the test dataset contains 31872 spikes."
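The windowing-and-concatenation protocol quoted above can be sketched as follows. This is a minimal illustration, not the paper's code: the spike times and the transition time `tau` are synthetic stand-ins, and the helper `window` is hypothetical.

```python
import numpy as np

# Synthetic stand-in spike times (seconds); tau marks the wake/sleep
# transition around which the paper's windows are defined.
rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0.0, 1000.0, size=5000))
tau = 500.0

def window(spikes, lo, hi):
    """Return the spikes falling in [lo, hi), shifted to start at 0."""
    s = spikes[(spikes >= lo) & (spikes < hi)]
    return s - lo

wake_train  = window(spikes, tau - 100, tau)        # [tau-100, tau]
wake_test   = window(spikes, tau - 200, tau - 100)  # [tau-200, tau-100]
sleep_train = window(spikes, tau, tau + 100)        # [tau, tau+100]
sleep_test  = window(spikes, tau + 100, tau + 200)  # [tau+100, tau+200]

# Concatenate the two states in chronological order, so that each
# resulting sequence lives on the interval [0, 200]:
train = np.concatenate([wake_train, sleep_train + 100])
test  = np.concatenate([wake_test,  sleep_test  + 100])
```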
Hardware Specification | Yes | "We use a workstation with an Intel Xeon Gold 6240R CPU and an Nvidia Quadro RTX 6000 GPU for training these models."
Software Dependencies | No | "Our methods are implemented in Python. For vanilla Hawkes processes, we use two methods, numerical differentiation and analytical expressions (Ozaki, 1979), to compute the gradient of the log-likelihood, and use the SLSQP method in scipy.optimize.minimize for optimization. For IN-Hawkes, we implement the log-likelihood ourselves, use autograd to compute the gradient, and use SLSQP for optimization."
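The baseline fitting procedure described above (Hawkes log-likelihood optimized by SLSQP with numerically differentiated gradients) can be sketched for the textbook univariate exponential-kernel case. This is an assumption-laden illustration, not the paper's multivariate SNMHP: the event times are synthetic and `neg_loglik` uses the standard Ozaki (1979) recursion for the excitation sum.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, T):
    """Negative log-likelihood of a univariate Hawkes process with
    exponential kernel phi(s) = alpha * beta * exp(-beta * s),
    observed on [0, T]."""
    mu, alpha, beta = params
    A = 0.0   # running excitation term, updated recursively (Ozaki, 1979)
    ll = 0.0
    for i in range(len(t)):
        if i > 0:
            A = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A)
        ll += np.log(mu + alpha * beta * A)
    # Compensator: integral of the intensity over [0, T].
    ll -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - t)))
    return -ll

# Fit by SLSQP; with no `jac` supplied, scipy falls back to
# finite-difference (numerical) gradients, one of the two gradient
# options mentioned above.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, size=200))  # stand-in event times
res = minimize(neg_loglik, x0=[0.5, 0.3, 1.0], args=(t, 100.0),
               method="SLSQP",
               bounds=[(1e-6, None), (1e-6, 0.999), (1e-6, None)])
```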
Experiment Setup | Yes | "For hyperparameters, because the ground-truth basis functions are known, the number, support and parameters of the basis functions are chosen as the ground truth. By cross validation, the hyperparameter α is chosen to be 0.2 for all three algorithms. The number of grid points in the Gibbs sampler and of quadrature nodes in the EM algorithm and mean-field approximation is set to 2000, and the number of iterations for the three inference algorithms is set to 200, which is large enough for convergence."
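The cross-validation step quoted above can be sketched as a simple grid search. Everything here is hypothetical scaffolding: `score` is a placeholder for the held-out log-likelihood the real experiments would compute, shaped so that its maximum sits at α = 0.2 purely for illustration.

```python
import numpy as np

# Hypothetical candidate grid for the hyperparameter alpha.
alphas = [0.05, 0.1, 0.2, 0.5, 1.0]

def score(alpha, fold):
    # Placeholder for the held-out log-likelihood of the model
    # fitted with this alpha on the given CV fold.
    return -(alpha - 0.2) ** 2 + 0.01 * fold

# Average the per-fold scores and pick the best alpha.
cv = {a: float(np.mean([score(a, k) for k in range(5)])) for a in alphas}
best_alpha = max(cv, key=cv.get)
```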