Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Temporally Rich Deep Learning Models for Magnetoencephalography
Authors: Tim Chard, Mark Dras, Paul Sowman, Steve Cassidy, Jia Wu
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose more complex NN models that focus on modelling temporal relationships in the data, and apply them to the challenges of MEG data. We apply these models to an extended range of MEG-based tasks, and find that they substantially outperform existing work on a range of tasks, particularly but not exclusively temporally-oriented ones. We also show that an autoencoder-based preprocessing component that focuses on the temporal aspect of the data can improve the performance of existing models. |
| Researcher Affiliation | Academia | Tim Chard EMAIL School of Computing Macquarie University Mark Dras EMAIL School of Computing Macquarie University Paul Sowman EMAIL School of Psychological Sciences Macquarie University Steve Cassidy EMAIL School of Computing Macquarie University Jia Wu EMAIL School of Computing Macquarie University |
| Pseudocode | No | The paper describes model architectures and their components, such as 'Time Conv', 'Temporal Residual Block', 'SERes', 'SETra', and 'Time Autoencoder', detailing their structure and operations. This is often accompanied by architectural diagrams (e.g., Figure 1, Figure 3, Figure 4, Figure 5). However, there are no explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor are there any structured, step-by-step procedures formatted like code within the main text. |
| Open Source Code | Yes | Our source code is available at https://github.com/tim-chard/DeepLearningForMEG. |
| Open Datasets | Yes | 4.1.1 Cam-CAN dataset Cam-CAN is the largest MEG dataset that is available, consisting of more than 600 subjects (Shafto et al., 2014). [...] 4.1.2 Mother Of Unification Studies In addition to Cam-CAN, another large dataset has also been recently released, the Mother Of Unification Studies (MOUS) (Schoffelen et al., 2019)... |
| Dataset Splits | Yes | Specifically, we assign 60% of the subjects to the training set (388), 20% to the validation set (128), and the remaining 20% to the test set (128). In addition to the training, validation and test set, we also use a development set which is created by partitioning half the data (instead of subjects) from the validation set. |
| Hardware Specification | No | We trained each model on a GPU with a batch size of 128, using the Adam gradient descent optimization algorithm (Kingma & Ba, 2015) with a learning rate of 10⁻³ which optimized the cross-entropy loss of each model. |
| Software Dependencies | No | Our implementation of the residual connection is very similar to the PyTorch implementation (Paszke et al., 2019) and, ignoring the convolution that is being used, only differs in how batch normalization is applied. Our model includes four transformer layers as implemented by PyTorch (Paszke et al., 2019), each with an embedding size of 16 and a feedforward dimension of 64. |
| Experiment Setup | Yes | We trained each model on a GPU with a batch size of 128, using the Adam gradient descent optimization algorithm (Kingma & Ba, 2015) with a learning rate of 10⁻³ which optimized the cross-entropy loss of each model. We used early stopping on the validation loss with a patience of 3. [...] In addition, because of the increased memory requirements, we use a batch size of 32. To compensate for this we accumulate gradients across 4 batches to maintain an effective batch size of 128. |
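The Dataset Splits row describes a subject-wise partition (60% train, 20% validation, 20% test), meaning whole subjects are assigned to one split rather than individual trials. The sketch below illustrates that pattern in plain Python; `split_subjects` is a hypothetical helper written for this page, not a function from the paper's released code, and the seed and ID scheme are assumptions.

```python
import random

def split_subjects(subject_ids, train_frac=0.6, val_frac=0.2, seed=0):
    """Partition subjects (not individual trials) into train/val/test,
    mirroring the 60/20/20 subject-wise split described in the paper."""
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for a reproducible split
    n = len(ids)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]  # remainder goes to the test set
    return train, val, test
```

Because subjects (rather than trials) are split, no subject's recordings appear in more than one set, which avoids leakage of subject-specific signal characteristics between train and test.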
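The Experiment Setup row notes that memory constraints forced a batch size of 32, compensated by accumulating gradients across 4 batches for an effective batch size of 128. The toy example below shows why that works, using a scalar least-squares model with an analytic gradient rather than the paper's PyTorch training loop; the function names and the toy model are illustrative assumptions.

```python
def grad(w, xs, ys):
    # Analytic gradient of the mean squared error of (w*x - y) w.r.t. w.
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def accumulated_step(w, xs, ys, lr=1e-3, micro=32, accum=4):
    """One update whose gradient is averaged over `accum` micro-batches
    of size `micro` (here 4 x 32 = effective batch of 128)."""
    g = 0.0
    for i in range(accum):
        chunk = slice(i * micro, (i + 1) * micro)
        # Scale each micro-batch gradient by 1/accum so the sum equals
        # the mean gradient over the full effective batch.
        g += grad(w, xs[chunk], ys[chunk]) / accum
    return w - lr * g

def full_batch_step(w, xs, ys, lr=1e-3):
    # Reference update computed on the full batch at once.
    return w - lr * grad(w, xs, ys)
```

Because the micro-batches are equal-sized and their gradients are averaged, the accumulated update matches the full-batch update exactly (up to floating-point error), which is what lets a memory-limited batch of 32 reproduce the optimization behaviour of a batch of 128.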