Sample-efficient decoding of visual stimuli from fMRI through inter-individual functional alignment
Authors: Alexis Thual, Yohann Benchetrit, Félix Geilert, Jérémy Rapin, Iurii Makarov, Stanislas Dehaene, Bertrand Thirion, Hubert Banville, Jean-Rémi King
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluated on a retrieval task, compared to the anatomically-aligned baseline, our method halves the median rank in out-of-subject setups in low-data regimes. It also outperforms classical within-subject approaches when fewer than 100 minutes of data is available for the tested participant. |
| Researcher Affiliation | Collaboration | Alexis Thual: Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, NeuroSpin center, Gif-sur-Yvette, France; Mind, Inria Paris-Saclay, Palaiseau, France; Inserm, Collège de France, Paris, France. Yohann Benchetrit: Meta AI. Félix Geilert: Meta AI. Jérémy Rapin: Meta AI. Iurii Makarov: Meta AI. Stanislas Dehaene: Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, NeuroSpin center, Gif-sur-Yvette, France; Inserm, Collège de France, Paris, France. Bertrand Thirion: Mind, Inria Paris-Saclay, Palaiseau, France. Hubert Banville: Meta AI. Jean-Rémi King: Meta AI; Laboratoire des systèmes perceptifs, École normale supérieure, PSL University. |
| Pseudocode | No | The paper describes methods like Fused Unbalanced Gromov-Wasserstein (FUGW) and a block coordinate descent algorithm with Sinkhorn iterations, and presents a loss function in Equation 1. However, it does not provide these procedures in a structured pseudocode or algorithm block format with numbered steps. |
| Open Source Code | No | The paper mentions a link for FUGW (https://alexisthual.github.io/fugw) which is a third-party tool used, but does not provide specific access to the source code for the methodology described in this paper itself. There is no explicit statement about releasing the code for their own work. |
| Open Datasets | Yes | We analyze two fMRI datasets. The first dataset (Wen et al., 2017) comprises 3 human participants... The second dataset (Allen et al., 2021) denoted as the Natural Scenes Dataset (NSD) comprises 8 participants... precomputed per-trial regression coefficients accessible online. |
| Dataset Splits | Yes | The first dataset (Wen et al., 2017) comprises 3 human participants who watched 688 minutes of video... It amounts to 8640 training samples and 1200 test samples per individual. ... The second dataset (Allen et al., 2021)... For each selected participant, we split their 30 000 trials in two sets: all exclusive images are grouped in the decoding set and all shared images are grouped in the alignment set. We further split the decoding set into disjoint sets of images for training and testing individual decoders. |
| Hardware Specification | No | The fMRI data was acquired at 3T, 3.5mm isotropic spatial resolution and 2-second temporal resolution. This refers to data-acquisition hardware, not the computational hardware used for experiments. There is no mention of specific CPU/GPU models or other computational hardware. |
| Software Dependencies | Yes | Functional alignment On top of the aforementioned anatomical alignment, we apply a recent method from Thual et al. (2022) denoted as Fused Unbalanced Gromov-Wasserstein (FUGW). ...We use default parameters shipped with version 0.1.0 of FUGW. ... The first two steps are implemented with Nilearn (Abraham et al., 2014) and the last one with Scikit-Learn (Pedregosa et al., 2011). |
| Experiment Setup | Yes | To train decoders, we use the same regularization coefficient αridge across latent types and choose it by running a cross-validated grid search on folds of the training data. We find that results are robust to using different values and therefore set αridge = 50 000. Similarly, values for lag, window size and aggregation function are determined through a cross-validated grid search. ... Namely, α, which controls the balance between Wasserstein and Gromov-Wasserstein losses ... is set to 0.5. Secondly, ρ, which sets the importance of marginal constraints ... is set to 1. Finally, ε, which controls for entropic regularization ... is set to 10^-4. |
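Although the paper gives no pseudocode, the Sinkhorn iterations it invokes inside its block coordinate descent are a standard primitive. Below is a minimal NumPy sketch of plain entropic-OT Sinkhorn with uniform marginals; the full FUGW objective (Gromov-Wasserstein term, relaxed marginal constraints) is deliberately not reproduced, and the larger ε here is an illustration-only choice to keep the naive exponentials numerically stable.

```python
import numpy as np

def sinkhorn(cost, eps=1e-2, n_iter=200):
    """Plain entropic-OT Sinkhorn with uniform marginals.

    Sketch only: the paper's FUGW solver alternates Sinkhorn-like updates
    inside a block coordinate descent and adds Gromov-Wasserstein and
    marginal-relaxation terms that are omitted here.
    """
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    K = np.exp(-cost / eps)                          # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)      # rescale rows toward the source marginal
        v = b / (K.T @ u)    # rescale columns toward the target marginal
    return u[:, None] * K * v[None, :]               # transport plan

plan = sinkhorn(np.random.default_rng(0).random((5, 4)))
```

At an entropic regularization as small as the paper's ε = 10^-4, this naive formulation underflows; practical solvers work in the log domain instead.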
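The decoder setup in the last row can be sketched with closed-form ridge regression and a held-out grid search over the regularization strength. The toy data, the grid, and the single train/validation split are assumptions for illustration; the paper only reports choosing αridge by cross-validated grid search and retaining αridge = 50 000.

```python
import numpy as np

def ridge_fit(X, Y, alpha):
    """Closed-form ridge regression: W = (X^T X + alpha * I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Toy stand-in data: in the paper, X would hold fMRI responses and Y the
# latent representations of the images to decode.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40))
Y = X @ rng.standard_normal((40, 8)) + 0.5 * rng.standard_normal((300, 8))

# Held-out selection of alpha; the grid is a hypothetical example.
X_tr, Y_tr, X_va, Y_va = X[:200], Y[:200], X[200:], Y[200:]
errors = {a: np.mean((X_va @ ridge_fit(X_tr, Y_tr, a) - Y_va) ** 2)
          for a in [1e0, 1e2, 1e4, 5e4]}
best_alpha = min(errors, key=errors.get)
```

The paper's finding that results are robust across a range of αridge values is consistent with ridge validation curves being flat near their minimum.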