Comparing noisy neural population dynamics using optimal transport distances

Authors: Amin Nejatbakhsh, Victor Geadah, Alex Williams, David Lipshutz

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here, we use the metric to compare models of neural responses in different regions of the motor system and to compare the dynamics of latent diffusion models for text-to-image synthesis. We apply our method to compare simplified models of motor systems (Sec. 4.2) and the dynamics of conditional latent diffusion models (Sec. 4.3).
Researcher Affiliation | Academia | 1 Center for Computational Neuroscience, Flatiron Institute; 2 Applied and Computational Mathematics, Princeton University; 3 Center for Neural Science, New York University; 4 Department of Neuroscience, Baylor College of Medicine
Pseudocode | Yes | We provide an alternating minimization algorithm for computing the distance between two processes using their first- and second-order statistics (Appx. B). Algorithm 1: Alternating minimization for computing causal OT distance
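The quoted row refers to an algorithm operating on first- and second-order statistics. As context, the sketch below implements only the standard closed-form 2-Wasserstein (Bures) distance between two Gaussians, which is the basic building block for such statistics-based OT computations; it is not the paper's causal OT algorithm or its alternating minimization, and the function name is hypothetical.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(m1, S1, m2, S2):
    """Squared 2-Wasserstein (Bures) distance between Gaussians
    N(m1, S1) and N(m2, S2). Standard closed form, NOT the paper's
    causal OT algorithm; shown only as the Gaussian building block.
    """
    sqrt_S1 = sqrtm(S1)
    # Cross term (S1^{1/2} S2 S1^{1/2})^{1/2}; take the real part to
    # discard tiny imaginary round-off from sqrtm.
    cross = np.real(sqrtm(sqrt_S1 @ S2 @ sqrt_S1))
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))
```

For identical Gaussians the distance is 0; shifting the mean of a unit-covariance Gaussian by a unit vector gives a squared distance of 1.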
Open Source Code | Yes | Source code: https://github.com/amin-nejat/netrep.
Open Datasets | No | To test our framework, we consider two pretrained text-to-image latent diffusion models (v1-1 and v1-2) trained to generate text-conditional images from noise (Rombach et al., 2022). Models were taken from https://huggingface.co/CompVis. The paper describes using public models to generate data for its experiments, but does not provide access information for the generated data itself as a dataset, nor for other datasets used in its experiments.
Dataset Splits | No | For each prompt and each model, we generated 60 latent trajectories and decoded them into the image space (two decoded trajectories are shown in Fig. 5, and several others in Fig. 7 of the supplement). We repeated this process for 3 random seeds to use the within-category distances as a baseline. This provided 60 datasets (2 diffusion models, 10 prompts, 3 seeds per prompt), each containing 60 latent trajectories. The paper describes how data was generated for the experiments, not how a pre-existing dataset was split into training/validation/test sets.
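The dataset bookkeeping quoted above can be sanity-checked with a few lines (variable names are illustrative, not from the paper's code):

```python
# Counts taken from the quoted experimental description:
# 2 diffusion models x 10 prompts x 3 seeds -> 60 datasets,
# each containing 60 latent trajectories.
n_models, n_prompts, n_seeds = 2, 10, 3
trajectories_per_dataset = 60

n_datasets = n_models * n_prompts * n_seeds
total_trajectories = n_datasets * trajectories_per_dataset

print(n_datasets, total_trajectories)
```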
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory amounts, or cloud computing instance types) are mentioned in the paper for running the experiments.
Software Dependencies | No | We implemented DSA using the code provided in the GitHub repository https://github.com/mitchellostrow/DSA. The paper mentions using code from a GitHub repository for DSA, but it does not specify software dependencies with version numbers for its own methodology or the overall experimental setup.
Experiment Setup | Yes | For DSA we chose the hyperparameters n_delays = 9, rank = 10. We fixed all the other hyperparameters to the following for this and all other DSA experiments: delay_interval = 1, lr = 0.01, iters = 1000. ... we first projected each set of latent trajectories onto its top 10 principal components (PCs) before computing the distances between the 10-dimensional stochastic trajectories.
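The PCA preprocessing step quoted above can be sketched as follows, assuming the latent trajectories are stored as a (trials, time, dim) array; the shapes and random data here are hypothetical, not the paper's:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for one set of latent trajectories:
# 60 trials, 50 time steps, 64-dimensional latent space.
rng = np.random.default_rng(0)
trajectories = rng.standard_normal((60, 50, 64))

# Pool all time points across trials, fit PCA, and project every
# trajectory onto its top 10 principal components.
flat = trajectories.reshape(-1, trajectories.shape[-1])
pca = PCA(n_components=10).fit(flat)
projected = pca.transform(flat).reshape(60, 50, 10)
```

Fitting on the pooled (trials x time, dim) matrix ensures every trajectory in the set is projected onto the same 10-dimensional subspace before distances are computed.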