Differentiable Optimization of Similarity Scores Between Models and Brains
Authors: Nathan Cloos, Moufan Li, Markus Siegel, Scott Brincat, Earl Miller, Guangyu Robert Yang, Christopher Cueva
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We analyzed neural data from studies on nonhuman primates (Figure 1) and compared the neural responses to task-optimized recurrent neural networks (RNNs) or synthetic datasets using different similarity scores. To study what drives high similarity scores, we directly optimize the synthetic datasets to maximize their similarity to the neural datasets as assessed by different similarity measures. |
| Researcher Affiliation | Academia | 1 MIT, 2 NYU, 3 HIH Tübingen |
| Pseudocode | No | The paper describes methods and procedures in paragraph text, for example, in Section 3.2, it states "We initialize the synthetic dataset Y by randomly sampling from a standard Gaussian distribution... We use Adam... to optimize Y..." but it does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Comparing similarity scores across studies is challenging, primarily due to variability in naming and implementation conventions. As part of our contribution to the research community, we have created, and are continuing to develop, a Python package that benchmarks and standardizes similarity measures. Currently there are approximately 100 different similarity measures from 14 packages. Similarity package: https://github.com/nacloos/similarity-repository |
| Open Datasets | Yes | Mante et al. (2013): Prefrontal cortex (PFC) electrode recordings during a contextual decision-making task involving colored moving dots. ...Data link. Hatsopoulos et al. (2007): Primary motor cortex (M1) electrode recordings during a center-out reaching task. ...Data link: https://datadryad.org/stash/dataset/doi:10.5061/dryad.xsj3tx9cm. We use the Brain-Score library (Schrimpf et al., 2020) for the Majaj et al. (2015) and Freeman et al. (2013) datasets. |
| Dataset Splits | Yes | We use the R2 coefficient to evaluate goodness of fit and ridge regularization as well as 5-fold cross-validation that tests generalization across different experimental conditions. This decoding analysis employs logistic regression with stratified 5-fold cross-validation. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used for running the experiments or computations. It focuses on the methodology and results without detailing the computational infrastructure. |
| Software Dependencies | No | The paper names the software it relies on, e.g. "We use Adam (Kingma & Ba, 2017) to optimize Y to maximize the similarity score with X... For example, in the case of linear regression, we directly differentiate PyTorch's lstsq function," but it does not provide a dependency list or specify library versions. |
| Experiment Setup | Yes | To better characterize similarity measures, we optimize synthetic datasets Y to become more similar to a reference dataset X. We initialize the synthetic dataset Y by randomly sampling from a standard Gaussian distribution with the same shape as X. We use Adam (Kingma & Ba, 2017) to optimize Y to maximize the similarity score with X, leveraging the differentiability of the similarity measures, and stop the optimization when the score reaches a fixed threshold near 1. Note that some similarity measures have parameters to optimize to compute the similarity score. Our method can be applied in such cases too, as long as the similarity score is differentiable with respect to the input datasets. For example, in the case of linear regression, we directly differentiate PyTorch's lstsq function. |
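The optimization loop quoted in the Experiment Setup row can be sketched as follows. Linear CKA is used here as one example of a differentiable similarity measure; the data shapes, learning rate, step budget, and stopping threshold are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def linear_cka(X, Y):
    """Linear CKA between two (conditions x units) matrices, in [0, 1]."""
    X = X - X.mean(dim=0)
    Y = Y - Y.mean(dim=0)
    num = (Y.T @ X).norm() ** 2                    # ||Y^T X||_F^2
    den = (X.T @ X).norm() * (Y.T @ Y).norm()      # ||X^T X||_F ||Y^T Y||_F
    return num / den

torch.manual_seed(0)
X = torch.randn(100, 20)                           # reference "neural" dataset
# Synthetic dataset: standard Gaussian init with the same shape as X.
Y = torch.randn(100, 20, requires_grad=True)

opt = torch.optim.Adam([Y], lr=0.05)
for step in range(5000):
    opt.zero_grad()
    loss = -linear_cka(X, Y)                       # maximize the similarity score
    loss.backward()
    opt.step()
    if -loss.item() > 0.99:                        # stop near a threshold of 1
        break
```

Because the score is differentiable with respect to Y, the same loop works for any differentiable similarity measure; only `linear_cka` needs to be swapped out.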
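The Dataset Splits row mentions evaluating goodness of fit with the R² coefficient, ridge regularization, and 5-fold cross-validation. A minimal sketch of that evaluation with scikit-learn is below; the synthetic data, output dimensionality, and regularization strength are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                   # predictor features (conditions x units)
W = rng.normal(size=(20, 5))
Y = X @ W + 0.1 * rng.normal(size=(100, 5))      # target responses, mostly linear in X

# Ridge regression scored by R^2 under 5-fold cross-validation.
scores = cross_val_score(Ridge(alpha=1.0), X, Y, cv=5, scoring="r2")
print(scores.mean())
```

`cross_val_score` handles the fold splitting; for the stratified decoding analysis one would instead pair `LogisticRegression` with `StratifiedKFold`.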