Flow-field inference from neural data using deep recurrent networks

Authors: Timothy Doyeon Kim, Thomas Zhihao Luo, Tankut Can, Kamesh Krishnamurthy, Jonathan W. Pillow, Carlos D. Brody

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using spike train data from frontal brain regions of rats performing an auditory decision-making task, we demonstrate that FINDR performs competitively with existing methods in capturing the heterogeneous responses of individual neurons. When trained to disentangle task-relevant and irrelevant activity, FINDR uncovers interpretable low-dimensional dynamics. These dynamics can be visualized as flow fields and attractors, enabling direct tests of attractor-based theories of neural computation. We suggest FINDR as a powerful method for revealing the low-dimensional task-relevant dynamics of neural populations and their associated computations.
Researcher Affiliation | Academia | (1) Princeton Neuroscience Institute, Princeton, NJ; (2) Present address: Allen Institute & University of Washington, Seattle, WA; (3) School of Natural Sciences, Institute for Advanced Study, Princeton, NJ; (4) Present address: Department of Physics, Emory University, GA; (5) Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ; (6) Howard Hughes Medical Institute, Princeton University, Princeton, NJ. Correspondence to: Timothy Doyeon Kim <EMAIL>, Carlos D. Brody <EMAIL>.
Pseudocode | No | The paper describes the model architecture and optimization process through textual descriptions and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available as a GitHub repository: https://github.com/Brody-Lab/findr.
Open Datasets | Yes | We applied FINDR... to a dataset comprising 67 choice-selective neurons, selected from a larger population of 464 simultaneously recorded neurons from dorsomedial frontal cortex (dmFC) and medial prefrontal cortex (mPFC) of a rat engaged in a decision-making task across 448 trials (Luo et al., 2023).
Dataset Splits | Yes | We held out 13 neurons (about 20%) from this dataset, and partitioned the dataset into 5 different folds, each containing a subset of trials in random order. ... We used 3 of these folds for training, 1 fold for validation, and the remaining 1 fold for testing. We evaluated the 5-fold cross-validated log-likelihood of held-out neural activity to measure model performance.
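The split described above (13 of 67 neurons held out; 448 trials shuffled into 5 folds, with 3 for training, 1 for validation, 1 for testing) can be sketched as follows. This is a minimal illustration, not the authors' code: the seed, variable names, and use of Python's `random` module are our assumptions.

```python
import random

rng = random.Random(0)  # seed chosen for illustration only

n_neurons, n_trials = 67, 448  # counts quoted in the review above

# Hold out 13 neurons (~20%) for evaluating generalization to unseen neurons
heldout_neurons = rng.sample(range(n_neurons), 13)

# Shuffle trials and partition them into 5 folds: 3 train, 1 validation, 1 test
trial_order = list(range(n_trials))
rng.shuffle(trial_order)
fold_size = n_trials // 5  # 448 // 5 = 89; the remainder goes to the last fold
folds = [trial_order[i * fold_size:(i + 1) * fold_size] for i in range(4)]
folds.append(trial_order[4 * fold_size:])

train_trials = folds[0] + folds[1] + folds[2]
val_trials, test_trials = folds[3], folds[4]
```

For the full 5-fold cross-validation reported in the paper, this partition would be rotated so that each fold serves once as the test set.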
Hardware Specification | No | We thank Jonathan Halverson for help with using the Princeton HPC clusters. While 'Princeton HPC clusters' indicates a computing resource, it lacks specific hardware details such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | For SLDS and rSLDS, we used code from https://github.com/lindermanlab/ssm. For AutoLFADS, we used code from https://github.com/arsedler9/lfads-torch, with hyperparameter search configurations in configs/pbt.yaml. For GPFA, we used Elephant: https://github.com/NeuralEnsemble/elephant. For CEBRA, we used code from https://github.com/AdaptiveMotorControlLab/cebra. We fit a Euclidean-distance CEBRA-Time model using hyperparameters from https://cebra.ai/docs/demo_notebooks/CEBRA_best_practices.html#Items-to-consider, but with changes to three hyperparameters (model_architecture="offset10-model-mse", max_iterations=1000, output_dimension=2). While specific software packages and their repositories are mentioned, no version numbers are provided for these tools or for any core programming languages or libraries (e.g., Python, PyTorch).
Experiment Setup | Yes | We train for a total of 3000 epochs and minimize loss using mini-batch gradient descent with warm restart (Loshchilov & Hutter, 2017). The learning rate increases from 0 to η linearly for 10 epochs at the start of each cycle of length D_cycle_i = 2^(i-1) · D epochs, where i goes from 1 to i_end. After the 10 epochs, the learning rate decays in a cosine manner, reaching 0 at the end of the cycle. i_end is the smallest value for which Σ_{i=1}^{i_end} D_cycle_i ≥ 3000. D is set to 200. ... Here, η ∈ {10^(-2.0), 10^(-1.625), 10^(-1.25), 10^(-0.875), 10^(-0.5)}, H_FNN ∈ {30, 50, 100}, and H_RNN ∈ {50, 100, 200}. ... We set the coefficient of the ℓ2 regularization on the weights of all model parameters to 10^(-7). ... We set the time constant τ = 0.1 s. We set β = 2. We set the number of trials in a mini-batch to 25. We set the momentum in mini-batch gradient descent to 0.9. We anneal the KL term in Equation (22): specifically, the KL term is multiplied by 1 − 0.99^(iteration #).
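The warm-restart learning-rate schedule and KL annealing quoted above can be sketched as below. This is a minimal reading of the description, not the authors' implementation: the function names are ours, and the default η = 10^(-1.25) is just one value from the quoted search grid. With D = 200, the cycle lengths are 200, 400, 800, 1600 epochs, summing to exactly 3000 (so i_end = 4).

```python
import math

def lr_schedule(epoch, eta=10 ** -1.25, D=200, warmup=10):
    """Learning rate at a given epoch under the warm-restart schedule:
    cycle i lasts 2**(i-1) * D epochs; within each cycle, the rate rises
    linearly from 0 to eta over `warmup` epochs, then decays to 0 on a
    cosine curve over the remainder of the cycle."""
    start, i = 0, 1
    while True:
        length = 2 ** (i - 1) * D
        if epoch < start + length:
            t = epoch - start  # position within the current cycle
            if t < warmup:
                return eta * t / warmup  # linear warmup: 0 -> eta
            # cosine decay from eta toward 0 over the rest of the cycle
            frac = (t - warmup) / (length - warmup)
            return eta * 0.5 * (1 + math.cos(math.pi * frac))
        start += length
        i += 1

def kl_weight(iteration):
    """Annealing coefficient multiplying the KL term: 1 - 0.99**iteration,
    which rises from 0 toward 1 as training proceeds."""
    return 1 - 0.99 ** iteration
```

For example, `lr_schedule(0)` is 0, `lr_schedule(10)` peaks at η, and the rate resets to the warmup ramp at epoch 200, the start of the second cycle.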