Benchmarking Continuous Time Models for Predicting Multiple Sclerosis Progression

Authors: Alexander Luke Ian Norcliffe, Lev Proleev, Diana Mincu, F Lee Hartsell, Katherine A Heller, Subhrajit Roy

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We benchmark four continuous time models using a publicly available multiple sclerosis dataset. We find that the best continuous model is often able to outperform the best benchmarked discrete time model. We also carry out an extensive ablation to discover the sources of performance gains; we find that standardizing existing features leads to a larger performance increase than interpolating missing features.
Researcher Affiliation | Collaboration | Alexander Norcliffe (EMAIL), University of Cambridge; Lev Proleev (EMAIL), Google Research; Diana Mincu (EMAIL), Google Research; Fletcher Lee Hartsell (EMAIL), Duke University Health System; Katherine Heller (EMAIL), Google Research; Subhrajit Roy (EMAIL), Google Research
Pseudocode | No | The paper describes mathematical models and processes with equations and prose. It does not contain any sections explicitly labeled 'Pseudocode' or 'Algorithm', nor any structured code-like blocks detailing procedures.
Open Source Code | No | Our code does not exist in isolation but as part of a larger code base containing proprietary code. As such, our code is not publicly available at this time; we plan to open-source it in the future.
Open Datasets | Yes | Datasets. We use a publicly available dataset in this work: the Multiple Sclerosis Outcome Assessments Consortium (MSOAC) dataset (Rudick et al., 2014) (https://c-path.org/programs/msoac/).
Dataset Splits | Yes | Hyperparameters. We found hyperparameters using a grid search and 10-fold cross-validation.
Hardware Specification | No | The paper mentions that models and experiments were implemented in TensorFlow and Keras. However, it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | All models are implemented using TensorFlow (Abadi et al., 2015) and Keras (Chollet et al., 2015), using the TensorFlow ODE solver (https://www.tensorflow.org/probability/api_docs/python/tfp/math/ode). The paper names TensorFlow and Keras but does not specify their version numbers, nor the version of the TensorFlow Probability ODE solver used.
Experiment Setup | Yes | Hyperparameters. We found hyperparameters using a grid search and 10-fold cross-validation. The final hyperparameter configurations and values tested are given in Appendix B... Models are trained for 50 epochs with the Adam optimizer (Kingma & Ba, 2015); learning rates and batch sizes are hyperparameters given in Appendix B. (Tables 5-10 in Appendix B provide specific hyperparameter values.)
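The hyperparameter search procedure reported above (grid search combined with 10-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the paper's code: the `train_and_eval` callback, the fold seeding, and any grid values supplied by the caller are hypothetical placeholders.

```python
import itertools
import random
import statistics

def k_fold_indices(n, k=10, seed=0):
    """Split n sample indices into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_score(train_and_eval, data, params, k=10):
    """Mean validation score of one hyperparameter setting over k folds."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i in range(k):
        val = [data[j] for j in folds[i]]
        train = [data[j] for fold in folds[:i] + folds[i + 1:] for j in fold]
        scores.append(train_and_eval(train, val, params))
    return statistics.mean(scores)

def grid_search(train_and_eval, data, grid, k=10):
    """Try every combination in `grid`; return the best params and score."""
    best_params, best_score = None, float("-inf")
    for combo in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        score = cross_val_score(train_and_eval, data, params, k)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Mirroring the paper's note that learning rates and batch sizes were tuned, a caller might pass something like `grid = {"learning_rate": [1e-3, 1e-4], "batch_size": [32, 64]}` (illustrative values only; the actual grids are in the paper's Appendix B).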
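The ablation finding quoted above compares standardizing existing features against interpolating missing ones; "standardizing" here means the usual z-score transform. A minimal sketch, assuming features arrive as per-column lists with `None` marking missing entries (the representation is an assumption, not taken from the paper):

```python
import statistics

def standardize(column):
    """Z-score a feature column, leaving missing (None) entries untouched."""
    present = [x for x in column if x is not None]
    mu = statistics.mean(present)
    sigma = statistics.pstdev(present) or 1.0  # guard against zero variance
    return [None if x is None else (x - mu) / sigma for x in column]
```

Leaving `None` values in place (rather than filling them in) matches the ablation's distinction between standardization and interpolation as separate preprocessing choices.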