Getting aligned on representational alignment
Authors: Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Christopher J. Cueva, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nathan Cloos, Nikolaus Kriegeskorte, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas O'Connell, Thomas Unterthiner, Andrew Kyle Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this Perspective, we survey the exciting recent developments in representational alignment research in the fields of cognitive science, neuroscience, and machine learning. Despite their overlapping interests, there is limited knowledge transfer between these fields, so work in one field ends up duplicated in another, and useful innovations are not shared effectively. To improve communication, we propose a unifying framework that can serve as a common language for research on representational alignment, and map several streams of existing work across fields within our framework. We also lay out open problems in representational alignment where progress can benefit all three of these fields. |
| Researcher Affiliation | Collaboration | 1Princeton University 2NYU Center for Data Science 3TU Berlin 4BIFOLD 5Google DeepMind 6Max Planck Institute for Human Cognitive and Brain Sciences 7University of Cambridge 8MIT 9UCL 10University of Amsterdam 11Columbia University 12Cornell University 13Google Research 14Anthropic 15Harvard University 16Korea University 17Max Planck Institute for Informatics 18Max Planck Institute for Software Systems 19University of Oxford 20Los Alamos National Laboratory 21The Alan Turing Institute 22Justus Liebig University Giessen |
| Pseudocode | No | The paper describes a unifying framework for representational alignment research and reviews existing literature. It does not present any novel algorithms or procedures in a pseudocode or algorithm block format. |
| Open Source Code | No | The paper references external code, such as a Python package by Cloos et al. (2024a) for benchmarking similarity measures: "To facilitate comparisons across different studies and make explicit the implementation choices underlying a given code repository Cloos et al. (2024a) created, and are continuing to develop, a Python package that benchmarks and standardizes similarity measures." However, it does not provide source code for the framework or original methodology presented in this paper. |
| Open Datasets | Yes | Hebart et al. (2020) collected 1.46 million human triplet odd-one-out judgments... Visual Genome dataset (Krishna et al., 2017)... The Natural Scenes Dataset (NSD), in which high-resolution functional magnetic resonance imaging responses to tens of thousands of richly annotated natural scenes were measured... (Allen et al., 2022). |
| Dataset Splits | No | This paper is a perspective and review. It proposes a framework for representational alignment and summarizes existing work, but it does not conduct its own experiments or present new data, and therefore does not define dataset splits for its own work. |
| Hardware Specification | No | This paper is a perspective and review. It proposes a framework for representational alignment and summarizes existing work, but it does not conduct its own experiments and therefore does not specify hardware used for running experiments. |
| Software Dependencies | No | This paper is a perspective and review. It proposes a framework for representational alignment and summarizes existing work, but it does not conduct its own experiments and therefore does not list specific software dependencies with version numbers for its own implementation. |
| Experiment Setup | No | This paper is a perspective and review. It proposes a framework for representational alignment and summarizes existing work, but it does not conduct its own experiments and therefore does not provide experimental setup details or hyperparameters. |
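Neither the paper nor this report ships code, but the similarity measures that the Cloos et al. (2024a) package benchmarks (noted under Open Source Code above) can be illustrated in a few lines. The sketch below is not from the paper or that package; it is a minimal, hypothetical example of one common alignment measure, representational similarity analysis (RSA), which scores two systems by correlating their representational dissimilarity matrices (RDMs) over a shared stimulus set, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    # Representational dissimilarity matrix over stimuli (rows),
    # returned as a condensed pairwise-distance vector.
    return pdist(features, metric="correlation")

def rsa_score(features_a, features_b):
    # Spearman correlation between the two systems' RDMs:
    # a common second-order measure of representational alignment.
    rho, _ = spearmanr(rdm(features_a), rdm(features_b))
    return rho

# Toy example: two hypothetical "systems" embed the same 20 stimuli differently.
rng = np.random.default_rng(0)
stimuli = rng.normal(size=(20, 8))
system_a = stimuli @ rng.normal(size=(8, 16))
system_b = stimuli @ rng.normal(size=(8, 16)) + 0.1 * rng.normal(size=(20, 16))

print(f"RSA alignment: {rsa_score(system_a, system_b):.3f}")
```

Because RSA compares pairwise dissimilarity structure rather than raw activations, it is invariant to the two systems having different dimensionalities, which is one reason implementation choices (distance metric, correlation type) matter enough to standardize in a benchmark.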