BrainOOD: Out-of-distribution Generalizable Brain Network Analysis

Authors: Jiaxing Xu, Yongqiang Chen, Xia Dong, Mengcheng Lan, Tiancheng Huang, Qingtian Bian, James Cheng, Yiping Ke

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Our approach outperforms 16 existing methods and improves generalization to OOD subjects by up to 8.5%. Case studies highlight the scientific validity of the extracted patterns, which align with findings in the known neuroscience literature. We also propose the first OOD brain network benchmark, which provides a foundation for future research in this field. Our code is available at https://github.com/AngusMonroe/BrainOOD. ... 5 EXPERIMENTAL RESULTS We first compare BrainOOD with existing baselines in terms of in-domain (ID) and OOD classification accuracy. The results on 2 brain network datasets over 10-fold cross-validation (CV) are reported in Table 2. ... 5.4 ABLATION STUDY To verify the effectiveness of our proposed components in BrainOOD, we test our design of the loss functions by disabling them one by one. The results are reported in Table 4.
Researcher Affiliation Academia ¹College of Computing and Data Science, Nanyang Technological University; ²Department of Computer Science and Engineering, The Chinese University of Hong Kong; ³S-Lab, Nanyang Technological University
Pseudocode No The paper describes methods using mathematical equations and textual descriptions, but there are no explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code Yes Our code is available at https://github.com/AngusMonroe/BrainOOD.
Open Datasets Yes To investigate this OOD shift, we use two widely-studied, multi-site brain network datasets: ABIDE (Craddock et al., 2013), focused on Autism Spectrum Disorder (ASD), and ADNI (Dadi et al., 2019), centered around Alzheimer's Disease (AD).
Dataset Splits Yes To simulate an OOD setting, we adopt a site-holdout strategy: each dataset is split into training, validation, and test sets in an 8:1:1 ratio. Importantly, the validation/test set is composed entirely of subjects from one specific site that were not present in the training set, making them OOD samples relative to the training data. ... For model evaluation, we use a consistent random seed across all experiments and perform 10-fold cross-validation. The average accuracy across folds is reported to ensure robustness in the results, allowing us to fairly compare models' generalization performance under OOD conditions.
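The site-holdout strategy quoted above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function name, the `(subject_id, site)` input format, and the even split of the held-out site into validation and test halves are all assumptions.

```python
import random

def site_holdout_split(subjects, holdout_site, seed=0):
    """Split subjects so the validation/test sets contain only one held-out site.

    `subjects` is a list of (subject_id, site) pairs. All subjects from other
    sites go to training; the held-out site's subjects are shuffled and split
    evenly between validation and test, making both sets OOD relative to training.
    """
    rng = random.Random(seed)  # fixed seed for reproducible folds
    train = [sid for sid, site in subjects if site != holdout_site]
    ood = [sid for sid, site in subjects if site == holdout_site]
    rng.shuffle(ood)
    half = len(ood) // 2
    return train, ood[:half], ood[half:]
```

With roughly one tenth of the subjects coming from the held-out site, this reproduces the 8:1:1 train/validation/test ratio described in the paper.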
Hardware Specification No No specific hardware details (like GPU models, CPU types, or memory amounts) used for running the experiments are mentioned in the paper.
Software Dependencies No Section 5.1 mentions "scikit-learn (Pedregosa et al., 2011)" and Section 5.3 mentions visualization using "Nilearn toolbox (Abraham et al., 2014)". However, no specific version numbers are provided for these or any other software components.
Experiment Setup Yes The detailed baseline description and implementation of these experiments are provided in Appendices D.1 and D.2, respectively. ... Appendix D.2: Experimental Setup: We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.001 and a weight decay of 0.0001 for training. The batch size is set to 32. All models are trained for 100 epochs, and we select the best model based on validation set performance. We use Xavier initialization for all weights and set the dropout rate to 0.5. We use a 2-layer GIN as the backbone for graph OOD methods. The hyperparameters for BrainOOD are φ1 = 0.01, φ2 = 1.0, φ3 = 0.5, ϱ = 0.5 and ω = 0.5.
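The optimizer, initialization, and regularization settings quoted from Appendix D.2 can be sketched in PyTorch. This is a hedged illustration: the small MLP below is a placeholder for the paper's 2-layer GIN backbone, and the input/output dimensions are invented for the example.

```python
import torch
import torch.nn as nn

# Placeholder network; the paper uses a 2-layer GIN backbone instead.
# Dropout rate 0.5 matches the stated setup.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 2),
)

# Xavier initialization for all weight matrices, as described in the paper.
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

# Adam with the stated learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0001)

BATCH_SIZE = 32  # as stated
EPOCHS = 100     # best model selected on validation performance
```

The training loop itself (100 epochs, batch size 32, validation-based model selection) would wrap this configuration in the usual way.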