Multi-BK-Net: Multi-Branch Multi-Kernel Convolutional Neural Networks for Clinical EEG Analysis

Authors: Ann-Kathrin Kiessner, Tonio Ball, Joschka Boedecker

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluation is based on two public datasets with predefined test sets: the Temple University Hospital (TUH) Abnormal EEG Corpus and the TUH Abnormal Expansion Balanced EEG Corpus. Our Multi-BK-Net outperforms five baseline architectures and state-of-the-art end-to-end approaches in terms of accuracy and sensitivity on these datasets, setting a new benchmark. Furthermore, ablation experiments highlight the importance of the multi-branch, multi-scale input block of the Multi-BK-Net.
Researcher Affiliation | Academia | Ann-Kathrin Kiessner (EMAIL), Department of Computer Science & IMBIT//BrainLinks-BrainTools, University of Freiburg; Joschka Boedecker (EMAIL), Department of Computer Science & IMBIT//BrainLinks-BrainTools, University of Freiburg & Collaborative Research Institute Intelligent Oncology (CRIION); Tonio Ball (EMAIL), Neuromedical AI Lab, Medical Centre Freiburg & IMBIT//BrainLinks-BrainTools, University of Freiburg
Pseudocode | No | The paper describes the Multi-BK-Net architecture in Section 2.1 and provides a schematic in Figure 1, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code of this study is available at https://github.com/nrgrp/Multi-BK-Net-general-EEG-pathology-classification.git.
Open Datasets | Yes | Evaluation is based on two public datasets with predefined test sets: the Temple University Hospital (TUH) Abnormal EEG Corpus and the TUH Abnormal Expansion Balanced EEG Corpus. Our Multi-BK-Net outperforms five baseline architectures and state-of-the-art end-to-end approaches in terms of accuracy and sensitivity on these datasets, setting a new benchmark. Furthermore, ablation experiments highlight the importance of the multi-branch, multi-scale input block of the Multi-BK-Net.
Dataset Splits | Yes | Evaluation is based on two public datasets with predefined test sets... The TUAB consists of 2,993 recordings (49.18% pathological) from 2,329 patients (52.09% female, mean age: 48.55 ± 17.86 years) that are divided into a predefined training set (2,717 recordings) and an evaluation set (276 recordings). In contrast, the TUABEXB contains 8,879 recordings (49.75% pathological) obtained from 7,006 patients (mean age: 47.7 ± 21.2 years; 51.7% female) and is divided into a predefined training set (7,990 recordings) and an evaluation set (889 recordings). For training, we concatenated the TUAB and TUABEXB training sets, which we refer to as the TUH Abnormal Combined EEG Corpus (TUABCOMB). The hyperparameters of the Multi-BK-Net were optimised using multivariate tree-structured Parzen estimators (TPE) (Bergstra et al., 2011; 2013) from the Optuna library (Akiba et al., 2019) with respect to the mean validation accuracy and mean validation sensitivity values from a 5-fold cross-validation.
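The split sizes quoted above can be cross-checked with simple arithmetic; a minimal sketch, assuming the TUABCOMB training set is the plain concatenation of the two predefined training sets, as the paper states:

```python
# Recording counts quoted from the paper.
tuab_train, tuab_eval = 2_717, 276        # TUAB predefined split
tuabexb_train, tuabexb_eval = 7_990, 889  # TUABEXB predefined split

# Sanity checks against the reported corpus totals.
assert tuab_train + tuab_eval == 2_993       # TUAB: 2,993 recordings
assert tuabexb_train + tuabexb_eval == 8_879 # TUABEXB: 8,879 recordings

# TUABCOMB: concatenation of the two training sets.
tuabcomb_train = tuab_train + tuabexb_train
print(tuabcomb_train)  # 10707
```

So the combined training corpus used for model fitting contains 10,707 recordings, with both predefined evaluation sets held out.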
Hardware Specification | No | The paper mentions that "Experiments were conducted with a time budget of 45 hours per fold for each configuration run on a single fold," but it does not specify any particular hardware components like CPU or GPU models.
Software Dependencies | No | We implemented our model in Braindecode (BD), an open-source Python toolbox for decoding raw electrophysiological brain data with deep learning models (Schirrmeister et al., 2017b). The hyperparameters of the Multi-BK-Net were optimised using multivariate tree-structured Parzen estimators (TPE) (Bergstra et al., 2011; 2013) from the Optuna library (Akiba et al., 2019)... For each set of hyperparameters, we performed 5-fold cross-validation on the TUABCOMB training data using Stratified Group KFold from the Scikit-learn library (Pedregosa et al., 2011)... While several software tools are mentioned, specific version numbers for these components (e.g., Python version, Braindecode version, Optuna version, Scikit-learn version) are not provided.
Experiment Setup | Yes | Table 1: Hyperparameters of the Multi-BK-Net architecture. Hyperparameters have been optimised in preliminary experiments; more details on the hyperparameter optimisation and design choices are provided in Appendix A.3.
Hyperparameter: Selected value
Total number of temporal convolution filters: 35 (7 filters per branch)
Normalisation: Group Norm
Activation functions: GELU
Pooling mode first block: Mean
Pooling mode remaining blocks: Mean
Fourth conv-pooling block: True
Fourth conv-pooling block broader: True
Pool length: 3
Pool stride: 3
Stride before pool: True
Dropout: 0.502959339666169
Filter length convolution blocks: 20
Input window size: 6000
Weighted loss factor pathological: 1
Optimizer: AdamW
Optimizer beta1: 0.5
Learning rate: 0.0031414364096615
Weight decay: 1.8397405899531204e-05
Batch size: 64
Number of epochs: 42
Number of channels: 21
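For reproduction purposes, the Table 1 values can be collected into a single machine-readable configuration. A minimal sketch as a Python dict; the key names are illustrative and may differ from the identifiers used in the paper's repository:

```python
# Hyperparameters of Multi-BK-Net as quoted from Table 1 of the paper.
# Key names are hypothetical; only the values are taken from the table.
multi_bk_net_config = {
    "n_temporal_filters": 35,            # 7 filters per branch
    "normalisation": "GroupNorm",
    "activation": "GELU",
    "pool_mode_first_block": "mean",
    "pool_mode_remaining_blocks": "mean",
    "fourth_conv_pool_block": True,
    "fourth_conv_pool_block_broader": True,
    "pool_length": 3,
    "pool_stride": 3,
    "stride_before_pool": True,
    "dropout": 0.502959339666169,
    "filter_length_conv_blocks": 20,
    "input_window_size": 6000,           # samples per input window
    "weighted_loss_factor_pathological": 1,
    "optimizer": "AdamW",
    "optimizer_beta1": 0.5,
    "learning_rate": 0.0031414364096615,
    "weight_decay": 1.8397405899531204e-05,
    "batch_size": 64,
    "n_epochs": 42,
    "n_channels": 21,                    # EEG channels
}
```

Keeping the full unrounded values (dropout, learning rate, weight decay) matters here, since they are the exact outputs of the TPE search and rounding them would change the reported configuration.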