MindSimulator: Exploring Brain Concept Localization via Synthetic fMRI

Authors: Guangyin Bao, Qi Zhang, Zixuan Gong, Zhuojia Wu, Duoqian Miao

ICLR 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | By synthesizing extensive brain activity recordings, we statistically localize various concept-selective regions. Our proposed MindSimulator leverages advanced generative technologies... Using the synthetic recordings, we successfully localize several well-studied concept-selective regions and validate them against empirical findings, achieving promising prediction accuracy. Section 4: EXPERIMENTS SETUP. Section 5: EVALUATION FOR SYNTHETIC FMRI.
Researcher Affiliation | Academia | Guangyin Bao, Qi Zhang, Zixuan Gong, Zhuojia Wu, Duoqian Miao (all Tongji University)
Pseudocode | No | The paper describes the methodology and model architecture using text and mathematical equations, and provides an overview diagram in Figure 1, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | All preprocessed data, code, and model parameters used in our research will be made publicly available upon publication.
Open Datasets | Yes | We use the Natural Scenes Dataset (NSD) (Allen et al., 2022), which is an extensive whole-brain fMRI dataset... MSCOCO (Lin et al., 2014)... CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009)
Dataset Splits | Yes | The 9,000 unique images for each subject are used for training and the remaining 1,000 shared images are used for evaluation. During the training phase, all three fMRI of the same image are used individually; while for testing, three repeats are averaged.
Hardware Specification | Yes | All components of our MindSimulator can be trained using a single NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper mentions using AdamW (Loshchilov, 2017) and pre-trained CLIP ViT, but does not provide specific version numbers for general software libraries or programming languages like Python or PyTorch.
Experiment Setup | Yes | We trained the fMRI autoencoder end-to-end for 300 epochs, using AdamW (Loshchilov, 2017) with a cycle learning rate schedule starting from 3e-4. For the diffusion estimator, we set the timesteps T to 100, adopting a cosine noise schedule and 0.2 condition drop. We train it for 150 epochs using gradient clipping, with the same learning rate as our autoencoder. For hyperparameter β, we randomly sample from U(0, 1).
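The Dataset Splits row above describes an NSD-style protocol: each subject's 9,000 unique images supply training samples (each of the three repeats used individually), while the 1,000 shared images form the test set with repeats averaged. A minimal sketch of that logic follows; the array layout and function name are our own assumptions, not code from the paper:

```python
import numpy as np

def split_and_average(fmri, image_ids, shared_ids):
    """Hypothetical NSD-style split.

    fmri:       array of shape (n_images, n_repeats, n_voxels)
    image_ids:  image identifier per row of `fmri`
    shared_ids: identifiers of the shared (test) images
    """
    shared_mask = np.isin(image_ids, shared_ids)
    # Training: keep every repeat as an individual sample.
    train = fmri[~shared_mask].reshape(-1, fmri.shape[-1])
    # Testing: average the repeats of each shared image.
    test = fmri[shared_mask].mean(axis=1)
    return train, test
```

With 9,000 unique images and 3 repeats, this yields 27,000 training samples per subject and 1,000 averaged test samples.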
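The Experiment Setup row can be condensed into a configuration sketch. The dictionary layout below is ours, and `cosine_alpha_bar` implements the standard cosine noise schedule of Nichol & Dhariwal (2021), which we assume is the schedule the paper refers to:

```python
import math

# Hyperparameters as reported; key names are our own labels.
CONFIG = {
    "autoencoder": {"epochs": 300, "optimizer": "AdamW",
                    "lr_schedule": "cycle", "peak_lr": 3e-4},
    "diffusion_estimator": {"epochs": 150, "timesteps": 100,
                            "noise_schedule": "cosine",
                            "cond_drop_prob": 0.2, "grad_clip": True,
                            "peak_lr": 3e-4},
    "beta": "U(0, 1)",  # sampled uniformly at random
}

def cosine_alpha_bar(T=100, s=0.008):
    """Cumulative alpha-bar values of the cosine noise schedule:
    alpha_bar_t = f(t) / f(0), where f(t) = cos^2(((t/T + s)/(1 + s)) * pi/2).
    """
    f0 = math.cos(s / (1 + s) * math.pi / 2) ** 2
    return [math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2 / f0
            for t in range(T + 1)]
```

The schedule starts at alpha-bar = 1 and decays smoothly to ~0 over the T = 100 timesteps, which gives a gentler noise ramp than a linear schedule.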