BrainGuard: Privacy-Preserving Multisubject Image Reconstructions from Brain Activities
Authors: Zhibo Tian, Ruijie Quan, Fan Ma, Kun Zhan, Yi Yang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that BrainGuard sets a new benchmark in both high-level and low-level metrics, advancing the state-of-the-art in brain decoding through its innovative design. 4 Experiments Experimental Setup Datasets. The Natural Scenes Dataset (NSD) (Allen et al. 2022) encompasses fMRI obtained from eight participants... |
| Researcher Affiliation | Academia | 1School of Information Science and Engineering, Lanzhou University 2College of Computing and Data Science, Nanyang Technological University 3College of Computer Science and Technology, Zhejiang University EMAIL |
| Pseudocode | No | The paper only describes steps in regular paragraph text and mathematical formulas without structured formatting like pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the code, a link to a code repository, or mention of code in supplementary materials. |
| Open Datasets | Yes | The Natural Scenes Dataset (NSD) (Allen et al. 2022) encompasses fMRI obtained from eight participants, who were exposed to a total of 73,000 RGB images. This dataset has been extensively employed in numerous studies (Lin, Sprague, and Singh 2022; Chen et al. 2023; Takagi and Nishimoto 2023; Gu et al. 2023; Scotti et al. 2023) for the purpose of reconstructing images perceived during fMRI. |
| Dataset Splits | Yes | The training set for each subject comprises 8,859 image stimuli and 24,980 fMRI trials, while the test set includes 982 image stimuli and 2,770 fMRI trials. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using CLIP (Radford et al. 2021) embeddings and a SOTA diffusion model (Xu et al. 2023), but does not specify software libraries or their version numbers used for implementation (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The individual model, denoted as $f_s$, is optimized to predict the CLIP image embedding $I_{s,n}$ and the text embedding $T_{s,n}$ based on the subject's data. After each training round, the individual models transmit their updated parameters to a global model for aggregation: $\bar{\theta}_s = \alpha\bar{\theta}_s + (1-\alpha)\theta_s$, where $\alpha$ is the EMA factor, typically set to 0.999. ... $W_s^m \leftarrow W_s^m - \eta \frac{\partial L(\theta_s; \theta_g)}{\partial W_s^m}$ (3), where $\eta$ is the learning rate for weight learning. ... $L_{\text{MSE}}(P, Y) = \frac{1}{B}\sum_{i=1}^{B}(p_i - y_i)^2$ (4) ... $L_{\text{SoftCLIP}}(P, Y) = -\sum_{i=1}^{B}\sum_{j=1}^{B}\frac{\exp(y_i \cdot y_j/\tau)}{\sum_{k=1}^{B}\exp(y_i \cdot y_k/\tau)}\log\frac{\exp(p_i \cdot y_j/\tau)}{\sum_{k=1}^{B}\exp(p_i \cdot y_k/\tau)}$, where $\tau$ denotes the temperature hyperparameter. |
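The training setup quoted above (EMA-based global aggregation, an MSE loss, and a SoftCLIP-style contrastive loss) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names are hypothetical, and the SoftCLIP form assumed here is the MindEye-style soft contrastive loss that matches the fragments visible in the extracted equation.

```python
import numpy as np

def ema_aggregate(theta_global, theta_local, alpha=0.999):
    """Global-model aggregation by exponential moving average:
    theta_bar <- alpha * theta_bar + (1 - alpha) * theta_s."""
    return {k: alpha * theta_global[k] + (1 - alpha) * theta_local[k]
            for k in theta_global}

def mse_loss(P, Y):
    """Eq. (4): mean squared error over a batch of B embedding pairs."""
    return np.mean((P - Y) ** 2)

def _softmax(logits):
    # Numerically stable row-wise softmax.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_clip_loss(P, Y, tau=0.05):
    """Assumed MindEye-style SoftCLIP: soft targets come from
    image-image similarities; predictions are scored by
    predicted-embedding-to-image similarities."""
    P = P / np.linalg.norm(P, axis=-1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=-1, keepdims=True)
    target = _softmax(Y @ Y.T / tau)                      # soft labels
    log_pred = np.log(_softmax(P @ Y.T / tau) + 1e-12)    # prediction log-probs
    return -np.mean((target * log_pred).sum(axis=-1))
```

With `alpha = 0.999`, the global model changes very slowly per round, which is consistent with the EMA factor quoted in the table.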