Biologically Plausible Brain Graph Transformer

Authors: Ciyuan Peng, Yuelong Huang, Qichao Dong, Shuo Yu, Feng Xia, Chengqi Zhang, Yaochu Jin

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on three benchmark datasets demonstrate that BioBGT outperforms state-of-the-art models, enhancing biologically plausible brain graph representations for various brain graph analytical tasks.
Researcher Affiliation Academia Federation University Australia; Dalian University of Technology; Zhejiang Gongshang University; RMIT University; Hong Kong Polytechnic University; Westlake University
Pseudocode No The paper describes its methodology using mathematical formulations and descriptive text, such as in Sections 3.1 and 3.2, but does not include a distinct pseudocode or algorithm block.
Open Source Code Yes Our code is available at https://github.com/pcyyyy/BioBGT.
Open Datasets Yes Datasets. We conduct experiments on fMRI data collected from three benchmark datasets. (1) Autism Brain Imaging Data Exchange (ABIDE) dataset (https://fcon_1000.projects.nitrc.org/indi/abide/). This dataset contains resting-state fMRI data of 1,009 anonymous subjects... (2) Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (https://adni.loni.usc.edu/)... (3) Attention Deficit Hyperactivity Disorder (ADHD-200) dataset (https://fcon_1000.projects.nitrc.org/indi/adhd200/)
Dataset Splits Yes Each dataset is randomly split, with 80% used for training, 10% for validation, and 10% for testing.
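The reported 80/10/10 random split can be sketched as a subject-level index shuffle. This is a minimal illustration, not the authors' implementation (their repository may split differently, e.g. with stratification); the function name and seed handling are assumptions.

```python
import random

def split_dataset(n_samples, seed=0):
    """Randomly split sample indices into 80% train, 10% validation,
    10% test, as described in the report above. Illustrative only."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# e.g. the 1,009 ABIDE subjects -> 807 train, 100 validation, 102 test
train, val, test = split_dataset(1009)
print(len(train), len(val), len(test))
```

With integer truncation the test partition absorbs the remainder, so every subject lands in exactly one partition.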
Hardware Specification Yes Model training is performed on an NVIDIA A6000 GPU with 48GB of memory.
Software Dependencies Yes Our model is implemented using PyTorch Geometric v2.0.4 and PyTorch v1.9.1.
Experiment Setup Yes The detailed hyperparameter settings for training BioBGT on the three datasets are summarized in Table 3. (Table 3 lists: #Layers 3; #Attention heads 8; Edge-weight threshold 0.3/0/0 (one value per dataset); Hidden dimensions 128; FFN hidden dimensions 256; Dropout 0.5/0.1/0.1 (one value per dataset); Readout method mean; Learning rate 3e-4; Batch size 128; #Epochs 200; Weight decay 1e-4; Warm-up steps 10)
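The Table 3 settings can be collected into a small config sketch. The key names are illustrative (not identifiers from the authors' code), and mapping the per-dataset threshold/dropout values to ABIDE/ADNI/ADHD-200 follows the dataset order used in the paper, which is an assumption.

```python
# Shared hyperparameters reported in Table 3; key names are illustrative.
BASE_CONFIG = {
    "num_layers": 3,
    "num_attention_heads": 8,
    "hidden_dim": 128,
    "ffn_hidden_dim": 256,
    "readout": "mean",
    "learning_rate": 3e-4,
    "batch_size": 128,
    "epochs": 200,
    "weight_decay": 1e-4,
    "warmup_steps": 10,
}

# Table 3 lists three values for edge-weight threshold (0.3/0/0) and
# dropout (0.5/0.1/0.1); the dataset assignment below is an assumption.
PER_DATASET = {
    "ABIDE":    {"edge_weight_threshold": 0.3, "dropout": 0.5},
    "ADNI":     {"edge_weight_threshold": 0.0, "dropout": 0.1},
    "ADHD-200": {"edge_weight_threshold": 0.0, "dropout": 0.1},
}

def config_for(dataset):
    """Merge shared settings with the dataset-specific overrides."""
    return {**BASE_CONFIG, **PER_DATASET[dataset]}
```

Keeping the shared and per-dataset values separate makes it explicit which two hyperparameters vary across the three benchmarks.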