BrainUICL: An Unsupervised Individual Continual Learning Framework for EEG Applications
Authors: Yangxuan Zhou, Sha Zhao, Jiquan Wang, Haiteng Jiang, Shijian Li, Tao Li, Gang Pan
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of the proposed BrainUICL has been evaluated on three different mainstream EEG tasks. BrainUICL can effectively balance both plasticity and stability during CL, achieving better plasticity on new individuals and better stability across all unseen individuals, which holds significance in a practical setting. We have conducted our BrainUICL framework on three different downstream EEG tasks shown in Tab. 2. Specifically, for the i-th incremental individual, we compute its personal performance through the same model at three different temporal states (i.e., M0, Mi-1, Mi). After each adaptation, we measure the latest model's stability on the generalization set. Here, M0 denotes the initial model. Mi-1 and Mi represent the incremental model before and after adapting to the i-th individual, respectively. MNT denotes the final model after continual adaptation to all incremental individuals. The results demonstrate that our method achieves both better plasticity and better stability. |
| Researcher Affiliation | Academia | Yangxuan Zhou1,2, Sha Zhao1,2, Jiquan Wang1,2, Haiteng Jiang3,4,1, Shijian Li1,2, Tao Li3,4,1, Gang Pan1,2,4. 1State Key Laboratory of Brain-machine Intelligence, Zhejiang University; 2College of Computer Science and Technology, Zhejiang University; 3Department of Neurobiology, Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine; 4MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University. EMAIL; EMAIL |
| Pseudocode | Yes | Algorithm 1: UICL Algorithm. Input: {X_S^i, Y_S^i}\_{i=1}^{N_S}, {X_T^i}\_{i=1}^{N_T}, {X_G^i, Y_G^i}\_{i=1}^{N_G}. Output: M. Incremental Model Pretraining: pretrain the model M0 using the source data {X_S^i, Y_S^i}. Unsupervised Individual Continual Learning: for i ← 1 to N_T do ... |
| Open Source Code | Yes | The source code is available at https://github.com/xiaobaben/BrainUICL. |
| Open Datasets | Yes | As shown in Tab. 1, we employ three mainstream EEG tasks for evaluation: sleep staging, emotion recognition and motor imagery. Specifically, for each EEG task, we conduct our study using a publicly available dataset, namely ISRUC (Khalighi et al., 2016), FACED (Chen et al., 2023), and Physionet-MI (Schalk et al., 2004), respectively. |
| Dataset Splits | Yes | Based on our UICL setting, each dataset is divided into three parts: pretraining, incremental and generalization sets, with a ratio of 3:5:2. The pretraining set is used to pretrain the initial incremental model M0. The incremental set (i.e., the continual individual flow) is used for individual continual domain adaptation and for evaluating the model's plasticity. During this step, the incremental model needs to continuously adapt to each unseen individual one by one. The generalization set is used to evaluate the model's stability after each round of incremental individual adaptation is completed. The detailed UICL processes are listed in Appendix D, Fig. 8. |
| Hardware Specification | No | The paper does not explicitly state the specific hardware (e.g., GPU models, CPU types, memory) used for running the experiments. It discusses computational cost but without hardware details. |
| Software Dependencies | No | Table 8 lists 'Adam W' as an optimizer and mentions 'Transformer' layers, but does not provide specific version numbers for any software libraries, frameworks (like PyTorch or TensorFlow), or programming languages used. |
| Experiment Setup | Yes | For incremental model pretraining, we set the number of training epochs to 100 and the learning rate to 1e-4. For SSL training and the subsequent fine-tuning, we set the number of epochs to 10 for both. The default learning rates for these two processes are 1e-6 and 1e-7, respectively. Table 8: Hyper-parameters of the proposed BrainUICL. For Conv1D, the parameters from left to right are: (filter, kernel_size, stride). Pre-training: Epoch 100, Learning Rate 1e-4, AdamW β1 0.5, AdamW β2 0.99, AdamW Weight Decay 3e-4, Batch 32. Transformer: Attention Head 8, Attention Dim 512, Attention Layer 3, Dropout 0.1. Self-supervised Learning: Epoch 10, Learning Rate 1e-6. Continual Adaptation: Epoch 10, Learning Rate 1e-7, Confidence Threshold ξ1 0.9, Confidence Threshold ξ2 0.9, Alignment Interval 2. |
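The 3:5:2 pretraining/incremental/generalization split described above is subject-level, so the sets share no individuals. A minimal sketch of how such a split could be produced (function name and seeding are our own illustration, not taken from the paper's code):

```python
import random

def split_subjects(subject_ids, seed=0):
    """Split subject IDs into pretraining / incremental / generalization
    sets with the paper's 3:5:2 ratio, at the subject level (no overlap)."""
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_pre = round(0.3 * n)
    n_inc = round(0.5 * n)
    return {
        "pretrain": ids[:n_pre],
        "incremental": ids[n_pre:n_pre + n_inc],
        "generalization": ids[n_pre + n_inc:],
    }

splits = split_subjects(range(100))
print({k: len(v) for k, v in splits.items()})
# → {'pretrain': 30, 'incremental': 50, 'generalization': 20}
```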
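The UICL loop in Algorithm 1 adapts one incremental individual at a time: self-supervised adaptation on that individual's unlabeled EEG, then fine-tuning, then evaluation of plasticity (on the new individual) and stability (on the fixed generalization set). A hedged Python sketch of that control flow, with the training and evaluation routines left as injected callables since their internals are specific to the paper's implementation:

```python
def uicl(model, incremental_subjects, generalization_set,
         ssl_train, finetune, evaluate):
    """Sketch of the UICL loop (callable names are our own abstraction).

    For each individual in the continual flow:
      1. adapt via self-supervised learning on unlabeled data (10 epochs, lr 1e-6),
      2. fine-tune (10 epochs, lr 1e-7),
      3. record plasticity (score on the new individual) and
         stability (score on the fixed generalization set).
    """
    plasticity, stability = [], []
    for subject in incremental_subjects:          # continual individual flow
        ssl_train(model, subject)                 # unsupervised adaptation
        finetune(model, subject)                  # confidence-filtered fine-tuning
        plasticity.append(evaluate(model, subject))
        stability.append(evaluate(model, generalization_set))
    return model, plasticity, stability
```

Evaluating the same individual under M0, Mi-1, and Mi, as the paper describes, amounts to calling `evaluate` with checkpoints saved before and after each adaptation step.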
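The hyper-parameters quoted from Table 8 can be collected into a single configuration object, which makes the three training phases easy to compare at a glance. The dictionary keys below are our own naming, not taken from the paper's code; only the values come from the quoted table:

```python
# Hyper-parameters reported in the paper's Table 8, grouped by phase.
CONFIG = {
    "pretrain": {
        "epochs": 100, "lr": 1e-4, "batch_size": 32,
        "adamw": {"betas": (0.5, 0.99), "weight_decay": 3e-4},
    },
    "transformer": {"heads": 8, "dim": 512, "layers": 3, "dropout": 0.1},
    "ssl": {"epochs": 10, "lr": 1e-6},
    "continual_adaptation": {
        "epochs": 10, "lr": 1e-7,
        "confidence_thresholds": (0.9, 0.9),  # ξ1, ξ2
        "alignment_interval": 2,
    },
}
```

Note the unusual AdamW β1 of 0.5 (versus the common default of 0.9) during pretraining, and that each later phase drops the learning rate by roughly two orders of magnitude.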