Discovering Temporally Compositional Neural Manifolds with Switching Infinite GPFA
Authors: Changmin Yu, Maneesh Sahani, Máté Lengyel
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate that the infinite GPFA model correctly infers dynamically changing activations of latent factors on a synthetic dataset. By fitting the infinite GPFA model to population activities of hippocampal place cells during spatial tasks with alternating random foraging and spatial memory phases, we identify novel nontrivial and behaviourally meaningful dynamics in the neural encoding process. |
| Researcher Affiliation | Academia | 1Computational and Biological Learning Lab, Department of Engineering, University of Cambridge 2Gatsby Computational Neuroscience Unit, UCL 3Center for Cognitive Computation, Department of Cognitive Science, Central European University |
| Pseudocode | No | The paper describes methods and processes using mathematical formulations and descriptive text, but it does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Python implementation of the infinite GPFA model can be accessed through this repo: https://github.com/changmin-yu/infinite_gpfa. |
| Open Datasets | Yes | We apply our model to simultaneously recorded population activities of 204 place cells from rat hippocampal CA1, whilst the rat is performing a spatial memory task (Pfeiffer and Foster, 2013). |
| Dataset Splits | No | The paper describes generating synthetic data and processing a single neural recording session (e.g., binning spike trains), but it does not specify explicit training, validation, or test splits for model evaluation or reproduction of experiments for either dataset. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and PyTorch but does not provide specific version numbers for these or any other software libraries or frameworks. It refers to a GitHub repository for implementation but the paper text itself lacks version details. |
| Experiment Setup | Yes | All models are trained with Adam optimiser (Kingma and Ba, 2014), with learning rate 0.01. For the main experimental evaluations, we train all models over 2000 epochs. The infinite GPFA model further places a Gamma prior on α, with s_1 = 1.0 and s_2 = 1.0 (Equation S.13). We set the number of inducing points to be 30 for the main evaluations... For all models, we use the squared exponential (SE) kernels, with trainable scale and lengthscale parameters. The initial scale and lengthscale parameters are s_d^0 = 1.0 and τ_d^0 = 0.005 (in time domain) for all models. For all implemented models, we set the latent dimensions, D, to be 10. |
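
The experiment-setup row above lists concrete hyperparameters (Adam with learning rate 0.01, 2000 epochs, SE kernels with initial scale 1.0 and lengthscale 0.005, 30 inducing points, D = 10 latent dimensions). A minimal sketch of how those values could be wired up in PyTorch is shown below; the `se_kernel` function and the placeholder parameter tensor are illustrative assumptions, not the authors' implementation, which is available at the linked GitHub repo.

```python
import torch

def se_kernel(x1, x2, scale=1.0, lengthscale=0.005):
    """Squared-exponential (SE) kernel.

    Default scale/lengthscale match the initial values reported in the
    paper; in the actual model both are trainable parameters.
    """
    sqdist = (x1.unsqueeze(-1) - x2.unsqueeze(-2)) ** 2
    return scale ** 2 * torch.exp(-0.5 * sqdist / lengthscale ** 2)

# Hyperparameters quoted from the table above.
D = 10              # latent dimensions
NUM_INDUCING = 30   # inducing points
EPOCHS = 2000
LR = 0.01

# Hypothetical stand-in for the model's trainable parameters
# (e.g. inducing-point locations for each latent dimension).
params = [torch.nn.Parameter(torch.randn(D, NUM_INDUCING))]
optimizer = torch.optim.Adam(params, lr=LR)

# Kernel matrix over a small grid of time points.
t = torch.linspace(0.0, 1.0, 5)
K = se_kernel(t, t)
```

Note that with the short initial lengthscale (0.005 in the time domain), off-diagonal kernel values decay rapidly, so the latent GPs start out nearly uncorrelated across distant time bins.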