Neural Encoding and Decoding at Scale

Authors: Yizi Zhang, Yanchen Wang, Mehdi Azabou, Alexandre Andre, Zixuan Wang, Hanrui Lyu, International Brain Laboratory, Eva L Dyer, Liam Paninski, Cole Lincoln Hurwitz

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on the International Brain Laboratory (IBL) repeated site dataset (IBL et al., 2022), which consists of Neuropixels recordings targeting the same brain regions across 83 mice performing the same decision-making task. We benchmark NEDS on encoding and decoding of key task variables including whisker motion, wheel velocity, choice, and the block prior (Findling et al., 2023). We first demonstrate that NEDS outperforms an equivalent unimodal encoding and decoding method. We then compare NEDS to preexisting large-scale modeling approaches including POYO+ (Azabou et al., 2025) and NDT2 (Ye et al., 2024).
Researcher Affiliation | Academia | Yizi Zhang 1,4; Yanchen Wang 1; Mehdi Azabou 1; Alexandre Andre 2; Zixuan Wang 1; Hanrui Lyu 3; The International Brain Laboratory 4; Eva Dyer 2; Liam Paninski 1,4; Cole Hurwitz 1,4. 1 Columbia University; 2 University of Pennsylvania; 3 Northwestern University; 4 The International Brain Laboratory
Pseudocode | No | The paper describes the "Generative process" with mathematical equations (Section 3.4) but does not include a block explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code | Yes | Project page and code: https://ibl-neds.github.io/
Open Datasets | Yes | For all analyses in this paper, we use the IBL repeated site dataset (IBL et al., 2022). This dataset consists of Neuropixels recordings collected from 10 labs with standardized experimental pipelines. ... We also evaluate our approach on a monkey reaching dataset (Pei et al., 2021) detailed in Appendix H.
Dataset Splits | Yes | For all experiments, we evaluate the performance of each model on 10 held-out animals. For these 10 animals, we split the trials into training (70%), validation (10%), and test (20%) sets.
Hardware Specification | Yes | The 74-session models are trained on 16 Nvidia RTX8000 GPUs (each with 48GB memory) in under 2 days for a total of 2000 epochs. Single-session multimodal NEDS and unimodal encoding NEDS can be trained on a single Nvidia A40 GPU in less than 2 hours, also for 2000 epochs. ... For the 50 hyperparameter search experiments, we use 1 Nvidia H100 GPU (80GB memory) per experiment. ... For the large-scale pretraining experiment on 74 sessions, we use 4 Nvidia H200 GPUs for 4 hours and 30 minutes to reach 600 epochs.
Software Dependencies | No | The paper mentions specific tools like Ray Tune (Liaw et al., 2018), Weights & Biases (Biewald, 2020), and scikit-learn's (Pedregosa et al., 2011) GridSearchCV function, but does not specify version numbers for these tools or other core software libraries (e.g., Python, PyTorch/TensorFlow versions).
Experiment Setup | Yes | We conducted extensive hyperparameter tuning by initializing 50 random models with hyperparameters randomly selected from predefined ranges. The model with the best hyperparameters was chosen based on its validation set performance (see Appendix C for the hyperparameter ranges used in this experiment). ... Table 3. The range of possible NEDS model and optimizer hyperparameters from which Ray Tune randomly samples combinations. ... Table 4. Hyperparameters used for training 74-session multimodal NEDS.
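The 70%/10%/20% trial split reported under "Dataset Splits" can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' released code: the function name `split_trials`, the use of a fixed seed, and the shuffle-then-slice strategy are all assumptions for the example.

```python
import random

def split_trials(n_trials, seed=0, frac_train=0.7, frac_val=0.1):
    """Shuffle trial indices and slice them into train/val/test subsets.

    With the default fractions this yields the 70/10/20 split described
    in the paper; the remainder after train and val goes to test.
    """
    rng = random.Random(seed)
    idx = list(range(n_trials))
    rng.shuffle(idx)
    n_train = int(frac_train * n_trials)
    n_val = int(frac_val * n_trials)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_trials(400)
print(len(train), len(val), len(test))  # 280 40 80
```

In practice such a split would be computed per held-out animal (the paper evaluates on 10 held-out animals), and a fixed seed keeps the partition reproducible across runs.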