Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning Energy Conserving Dynamics Efficiently with Hamiltonian Gaussian Processes

Authors: Magnus Ross, Markus Heinonen

TMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | We demonstrate the method's success in learning Hamiltonian systems in various data settings. ... In this section we provide an experimental evaluation of our method for a variety of Hamiltonian systems. The code for our implementation of the model is available at https://github.com/magnusross/hgp.

Researcher Affiliation | Academia | Magnus Ross EMAIL University of Manchester Markus Heinonen EMAIL Aalto University

Pseudocode | No | The paper describes algorithms and methods in text and mathematical formulations but does not include any clearly labeled pseudocode blocks or algorithms.

Open Source Code | Yes | The code for our implementation of the model is available at https://github.com/magnusross/hgp.

Open Datasets | No | We evaluate our model on three true Hamiltonian systems: the fixed pendulum (FP), the spring pendulum (SP) (Lynch, 2000), and the Hénon-Heiles (HH) system (Hénon & Heiles, 1964). ... For each experiment we sample different initial conditions from the phase space of the system to generate data for each repeat, using the same data across each model we test.

Dataset Splits | Yes | Task 1: Trajectory forecasting. ... The model learns the system from the [0, T] interval, and is tasked to forecast the trajectory forward for [T, 2T]. ... Task 2: Initial condition extrapolation. ... The test set consists of 25 trajectories sampled from phase space using the same procedure, with length triple that of the training trajectories.

Hardware Specification | Yes | All experiments were run on a MacBook Pro (14-inch, 2021) laptop with an M1 Pro chip and 32 GB memory, using the CPU.

Software Dependencies | No | We use the torchdiffeq package (Chen, 2018) with the implicit dopri5 solver. ... We implement the HGP and baseline models in Python, with the PyTorch framework (Paszke et al., 2019). ... In practice we estimate derivatives using the gradient function from the numpy package (Harris et al., 2020).

Experiment Setup | Yes | We use M = 48 inducing points for task 1, and M = 128 for task 2. Throughout we use S = 256 basis functions, and run optimisation for 2500 iterations using the Adam optimiser (Kingma & Ba, 2015) with learning rate 3e-3. During training we use a single sample from the model to estimate the intractable expectations; when making predictions we use 32 samples. Unless otherwise stated, we use the shooting approximation with a single shooting state per 4 data points, i.e. L = N/4. We use the torchdiffeq package (Chen, 2018) with the implicit dopri5 solver. ... For both the HNN and the NODE we use 3 hidden layers of size 256, with tanh activation. ... We use the Adam optimiser with learning rate 3 × 10⁻³. For the experiments on task 1 we use a batch size of 16 and for those on task 2 we use a batch size of 32. ... We fix the shooting constraint variance as σ²_ξ = 1 × 10⁻⁶, and the energy constraint variance to σ²_χ = 2.5 × 10⁻³.
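As an illustrative aside on the energy-conserving dynamics the paper targets, the fixed pendulum (FP) system from the evaluation can be simulated with a symplectic (leapfrog) integrator, which keeps the Hamiltonian approximately constant over long rollouts. This is a generic sketch under assumed unit constants (mass, length, gravity all 1, so H(q, p) = p²/2 − cos q), not the authors' HGP implementation or their torchdiffeq-based solver setup:

```python
import math

def dH_dq(q):
    # dH/dq for H(q, p) = p^2/2 - cos(q); unit constants are an assumption
    return math.sin(q)

def leapfrog(q, p, dt, steps):
    """Symplectic leapfrog integration of the fixed pendulum."""
    for _ in range(steps):
        p -= 0.5 * dt * dH_dq(q)  # half-step momentum update
        q += dt * p               # full-step position update
        p -= 0.5 * dt * dH_dq(q)  # half-step momentum update
    return q, p

def energy(q, p):
    return 0.5 * p * p - math.cos(q)

q0, p0 = 1.0, 0.0
qT, pT = leapfrog(q0, p0, dt=0.01, steps=5000)
drift = abs(energy(qT, pT) - energy(q0, p0))  # stays small: bounded O(dt^2) error
```

Symplectic schemes like this keep the energy error bounded rather than accumulating, which is the same conservation property the Hamiltonian GP enforces by construction through its learned Hamiltonian.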