Learnability of Linear Port-Hamiltonian Systems

Authors: Juan-Pablo Ortega, Daiying Yin

JMLR 2024

Reproducibility assessment (variable, result, and LLM response):
Research Type: Experimental. The paper includes a section, '7 Numerical illustrations', which states: 'In this section, we present two numerical examples to demonstrate the effectiveness of our representation results from a learning point of view.' It also includes figures showing results and discusses training and testing data.
Researcher Affiliation: Academia. Both authors are affiliated with 'Nanyang Technological University, Singapore', which is an academic institution.
Pseudocode: No. The paper does not contain any clearly labeled pseudocode or algorithm blocks. It focuses on mathematical derivations and numerical examples.
Open Source Code: Yes. The paper states: 'For the reader's convenience, the Python code necessary to reproduce these numerics is public and can be found in https://github.com/YINDAIYING/Learnability-of-Linear-Port-Hamiltonian-Systems.'
Open Datasets: No. The paper uses data generated from simulated systems ('Non-dissipative circuit' and 'Positive definite Frenkel-Kontorova model'). For example, Section 7.1 states: 'We randomly generate an initial condition for the ground-truth system and integrate it using Euler's method... The 1000 pairs of input and output data will be used as training data.' This indicates self-generated data, not a publicly available dataset with concrete access information.
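The quoted data-generation procedure (random initial condition, explicit Euler integration, 1000 input/output pairs) can be sketched as follows. This is a hypothetical illustration, not the authors' code: the system matrices `A` and `B`, the state dimension, and the step size are placeholders, and the output map is a generic collocated readout rather than the paper's port-Hamiltonian parametrization.

```python
import numpy as np

# Hypothetical sketch of the Section 7.1 data-generation scheme: integrate a
# known ("ground-truth") linear state-space system with Euler's method and
# record 1000 input/output training pairs. All matrices here are illustrative.
rng = np.random.default_rng(0)

n, dt, steps = 4, 0.01, 1000
A = 0.1 * rng.standard_normal((n, n))   # stand-in system matrix
B = rng.standard_normal((n, 1))         # stand-in input matrix
x = rng.standard_normal(n)              # randomly generated initial condition

inputs, outputs = [], []
for _ in range(steps):
    u = rng.standard_normal(1)          # random scalar input
    x = x + dt * (A @ x + B @ u)        # explicit Euler step
    y = B.T @ x                         # illustrative collocated output
    inputs.append(u.copy())
    outputs.append(y.copy())

train_u, train_y = np.array(inputs), np.array(outputs)  # 1000 training pairs
```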
Dataset Splits: Yes. Section 7.1 states: 'The 1000 pairs of input and output data will be used as training data. We set a testing period of 4000 time steps'. Section 7.2 states: 'The 1000 pairs of input and output data are then used as training data.'
Hardware Specification: No. The paper does not provide specific details about the hardware used for running the experiments. It only mentions 'Python code' in relation to the numerical illustrations.
Software Dependencies: No. The paper mentions 'Python code' but does not specify any software names with version numbers (e.g., the Python version, or versions of libraries such as NumPy, SciPy, or PyTorch).
Experiment Setup: Yes. Section 7.1 states: 'This is carried out via gradient descent using a learning rate of λ = 0.1 for 500 epochs.' Section 7.2 states: 'we carry out the training using gradient descent with a learning rate of λ = 0.02 over 1500 epochs out of randomly chosen initial values for the initial state condition and the model parameters'.
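The reported optimization setup (plain gradient descent, learning rate λ = 0.1, 500 epochs, random initial parameters) can be illustrated with a minimal sketch. This is not the authors' code: the objective below is a placeholder least-squares problem rather than the paper's port-Hamiltonian model, and the data are synthetic.

```python
import numpy as np

# Minimal sketch of the reported training loop: plain gradient descent with
# learning rate lam = 0.1 for 500 epochs (the Section 7.1 hyperparameters),
# applied here to an illustrative least-squares objective.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))        # synthetic inputs (illustrative)
true_w = np.array([1.0, -2.0, 0.5])      # ground-truth parameters
y = X @ true_w                           # noiseless synthetic targets

w = rng.standard_normal(3)               # randomly chosen initial parameters
lam, epochs = 0.1, 500                   # learning rate and epochs from the paper
for _ in range(epochs):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w = w - lam * grad                        # gradient-descent update
```

With this well-conditioned quadratic objective, 500 steps at this learning rate are more than enough for `w` to converge to `true_w`.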