BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos

Authors: Eleanor Batty, Matthew Whiteway, Shreya Saxena, Dan Biderman, Taiga Abe, Simon Musall, Winthrop Gillis, Jeffrey Markowitz, Anne Churchland, John P. Cunningham, Sandeep R. Datta, Scott Linderman, Liam Paninski

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate this framework on two different experimental paradigms using distinct behavioral and neural recording technologies. |
| Researcher Affiliation | Academia | Eleanor Batty*, Matthew R. Whiteway*, Shreya Saxena, Dan Biderman, Taiga Abe, John Cunningham, Liam Paninski (Columbia University); Simon Musall, Anne Churchland (Cold Spring Harbor); Winthrop Gillis, Jeffrey E. Markowitz, Sandeep Robert Datta (Harvard Medical School); Scott W. Linderman (Stanford University) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | A python implementation of our pipeline is available at https://github.com/ebatty/behavenet, which is based on the PyTorch [46], ssm [47], and Test Tube [48] libraries. |
| Open Datasets | Yes | Widefield Calcium Imaging (WFCI) dataset [8, 19]. ... Neuropixels (NP) dataset [9, 18]. |
| Dataset Splits | Yes | Training terminates when MSE on held-out validation data, averaged over the previous 10 epochs, begins to increase. |
| Hardware Specification | No | The paper does not specify the hardware used for its experiments (GPU/CPU models, clock speeds, memory amounts, or other machine details). |
| Software Dependencies | No | The pipeline is based on the PyTorch [46], ssm [47], and Test Tube [48] libraries, but specific version numbers for these libraries are not provided. |
| Experiment Setup | Yes | We train the autoencoders by minimizing the mean squared error (MSE) between original and reconstructed frames using the Adam optimizer [39] with a learning rate of 10^-4. Models are trained for a minimum of 500 epochs and a maximum of 1000 epochs. Training terminates when MSE on held-out validation data, averaged over the previous 10 epochs, begins to increase. |
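The stopping criterion quoted above (minimum 500 epochs, maximum 1000, terminate when validation MSE averaged over the previous 10 epochs begins to increase) can be sketched as a small helper. This is a minimal illustration of the described rule, not code from the BehaveNet repository; the function name, signature, and the choice to compare the last-10-epoch average against the preceding 10-epoch average are assumptions.

```python
def should_stop(val_mse_history, epoch, min_epochs=500, max_epochs=1000, window=10):
    """Illustrative early-stopping rule (not from the BehaveNet codebase).

    val_mse_history: per-epoch validation MSE values observed so far.
    Stops at max_epochs, never before min_epochs, and otherwise stops once
    the mean MSE over the last `window` epochs exceeds the mean over the
    `window` epochs before that (i.e. the smoothed curve turns upward).
    """
    if epoch >= max_epochs:
        return True
    if epoch < min_epochs or len(val_mse_history) < 2 * window:
        return False
    recent = sum(val_mse_history[-window:]) / window
    previous = sum(val_mse_history[-2 * window:-window]) / window
    return recent > previous
```

A caller would append the validation MSE after each epoch and break out of the training loop when `should_stop` returns True.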