Timewarp: Transferable Acceleration of Molecular Dynamics by Learning Time-Coarsened Dynamics
Authors: Leon Klein, Andrew Foong, Tor Fjelde, Bruno Mlodozeniec, Marc Brockschmidt, Sebastian Nowozin, Frank Noé, Ryota Tomioka
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate Timewarp on small peptide systems. To compare with MD, we focus on the slowest transitions between metastable states, as these are the most difficult to traverse. |
| Researcher Affiliation | Collaboration | Leon Klein (Freie Universität Berlin); Andrew Y. K. Foong (Microsoft Research AI4Science); Tor Erlend Fjelde (University of Cambridge); Bruno Mlodozeniec (University of Cambridge); Marc Brockschmidt; Sebastian Nowozin; Frank Noé (Microsoft Research AI4Science, Freie Universität Berlin, Rice University); Ryota Tomioka (Microsoft Research AI4Science) |
| Pseudocode | Yes | Pseudocode for the MCMC algorithm is given in Algorithm 1 in Appendix C. Pseudocode is given in Algorithm 2 in Appendix D. |
| Open Source Code | Yes | The code is available here: https://github.com/microsoft/timewarp. |
| Open Datasets | No | The datasets are available upon request. Footnote: "Please contact EMAIL for dataset access." |
| Dataset Splits | No | For 2AA and 4AA, we train on a randomly selected training set of short trajectories (50 ns = 10^8 steps), and evaluate on unseen test peptides. |
| Hardware Specification | Yes | The training was performed on 4 NVIDIA A-100 GPUs for the 2AA and 4AA datasets and on a single NVIDIA A-100 GPU for the AD dataset. Inference with the model as well as all MD simulations were conducted on single NVIDIA V-100 GPUs for AD and 2AA, and on single NVIDIA A-100 GPUs for 4AA. |
| Software Dependencies | No | The paper mentions using the OpenMM and DeepSpeed libraries but does not specify their version numbers, which are required for a reproducible description of ancillary software. |
| Experiment Setup | Yes | For all MD simulations we use the parameters shown in Table 1. ... We use a weighted sum of the losses with weights detailed in Table 5. We use the Fused LAMB optimizer and the DeepSpeed library [34] for all experiments. The batch size as well as the training times are reported in Table 6. All training runs are started with a learning rate of 5·10⁻⁴; the learning rate is then consecutively decreased by a factor of 2 upon hitting training loss plateaus. |
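The Pseudocode row points to an MCMC algorithm (Algorithm 1, Appendix C) in which Timewarp's learned model serves as the proposal distribution in a Metropolis-Hastings scheme targeting the Boltzmann distribution. The appendix itself is not reproduced here, so the sketch below shows only the generic Metropolis-Hastings step with an asymmetric proposal; the function names (`mh_step`, `propose`, `log_q`) are illustrative, not the authors' API.

```python
import math
import random


def mh_step(x, log_target, propose, log_q):
    """One Metropolis-Hastings step with a possibly asymmetric proposal.

    log_target : unnormalized log-density of the target
                 (for MD, the Boltzmann weight -U(x) / kT)
    propose    : draws x' ~ q(. | x) -- in Timewarp's setting, the
                 learned conditional model plays this role
    log_q      : log q(x' | x), needed for the Hastings correction
                 when the proposal is not symmetric
    """
    x_new = propose(x)
    # Acceptance ratio: target ratio times reverse/forward proposal ratio.
    log_alpha = (log_target(x_new) - log_target(x)
                 + log_q(x, x_new) - log_q(x_new, x))
    if math.log(random.random()) < log_alpha:
        return x_new, True   # move accepted
    return x, False          # move rejected, chain stays put


# Toy usage: a deterministic proposal toward the mode of exp(-x^2).
# Moving 2.0 -> 0.0 strictly increases the target density, and with a
# symmetric log_q the step is always accepted.
random.seed(0)
x, accepted = mh_step(2.0, lambda x: -x * x, lambda x: 0.0, lambda a, b: 0.0)
```

Because the acceptance test corrects for the proposal's asymmetry, the chain samples the exact target distribution regardless of how the proposal was learned, which is what makes the learned-proposal scheme unbiased.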