Wills Aligner: Multi-Subject Collaborative Brain Visual Decoding

Authors: Guangyin Bao, Qi Zhang, Zixuan Gong, Jialei Zhou, Wei Fan, Kun Yi, Usman Naseem, Liang Hu, Duoqian Miao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We rigorously evaluate our Wills Aligner across various visual decoding tasks, including classification, cross-modal retrieval, and image reconstruction. The experimental results demonstrate that Wills Aligner achieves promising performance.
Researcher Affiliation | Academia | 1 Tongji University, 2 University of Oxford, 3 North China Institute of Computing Technology, 4 State Information Center of China, 5 Macquarie University
Pseudocode | No | The paper describes its methods using mathematical formulations (Equations 1-9) and prose, but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | No | The paper says "please refer to our appendix" with a footnote pointing to https://arxiv.org/abs/2404.13282, which is the arXiv preprint of the paper itself rather than a code repository. There is no explicit statement about releasing code for the described methodology.
Open Datasets | Yes | We conducted a comprehensive evaluation on the Natural Scene Dataset (NSD) (Allen et al. 2022).
Dataset Splits | Yes | Our experiments involve classification and retrieval on NSD. We employ a few-shot setting for a given subject, while the other subjects use their entire fMRI data for training. The few-shot ratios are set to 0.05, 0.1, and 0.2, corresponding to 1, 2, and 4 sessions of fMRI (an illustrative split sketch follows the table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory specifications) used for running the experiments. It only mentions a '7T fMRI dataset', which refers to the fMRI scanner, not the computational hardware.
Software Dependencies | No | The paper does not provide specific software dependencies or library versions (e.g., Python, PyTorch, or TensorFlow versions) used in the implementation.
Experiment Setup | No | The paper states that "The implementation details follow CLIP-MUSED (Zhou et al. 2024)" for the classification task and mentions that "α is the factor to balance two losses", but does not provide specific values for hyperparameters such as learning rate, batch size, number of epochs, or other detailed training configurations in the main text (a hedged loss-weighting sketch follows the table).
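
For the Dataset Splits row, here is a minimal sketch of how the quoted few-shot setting could be realized on a per-session basis. This is an illustration under assumptions, not the authors' code: the function name `few_shot_session_split`, the random permutation, and the example session count are hypothetical; only the ratio-to-session mapping (0.05 / 0.1 / 0.2 to 1 / 2 / 4 sessions) comes from the quoted text.

```python
import numpy as np

def few_shot_session_split(session_ids, few_shot_ratio, seed=0):
    """Split a subject's fMRI sessions into a small few-shot training set
    and a held-out remainder.

    Illustrative only: the paper reports ratios 0.05 / 0.1 / 0.2 mapping to
    1 / 2 / 4 sessions; here the training-session count is derived from the
    ratio and the total number of sessions supplied by the caller.
    """
    session_ids = np.asarray(list(session_ids))
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(session_ids))
    k = max(1, round(few_shot_ratio * len(session_ids)))
    return session_ids[order[:k]], session_ids[order[k:]]

# Example: with 20 training sessions (hypothetical total), a ratio of 0.1
# selects 2 sessions, consistent with the mapping quoted above.
train_sessions, held_out = few_shot_session_split(range(20), few_shot_ratio=0.1)
```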
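
The Experiment Setup row notes that α balances two losses without giving its value. The snippet below only shows the generic weighted-sum form L = L_main + α · L_aux that such a statement implies; the placeholder loss terms and the α value are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn.functional as F

ALPHA = 0.5  # assumed placeholder; the value is not reported in the main text

def combined_loss(pred, target, aux_logits, aux_labels, alpha=ALPHA):
    """Weighted sum of two losses, L = L_main + alpha * L_aux.

    The choice of MSE and cross-entropy here is illustrative; only the
    existence of a balancing factor alpha is taken from the quoted text.
    """
    main_loss = F.mse_loss(pred, target)
    aux_loss = F.cross_entropy(aux_logits, aux_labels)
    return main_loss + alpha * aux_loss
```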