Learning Physics Informed Neural ODEs with Partial Measurements

Authors: Paul Ghanem, Ahmet Demirkaya, Tales Imbiriba, Alireza Ramezani, Zachary Danziger, Deniz Erdogmus

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The performance of the proposed approach is assessed in comparison to state-of-the-art model learning methods on several challenging nonlinear simulations and real-world datasets. The benchmark results are summarized in Table 1, which reports normalized Root Mean Square Error (nRMSE) values for each model and method.
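The paper does not spell out which normalization it uses for nRMSE; a minimal sketch, assuming the common convention of dividing the RMSE by the range of the ground-truth signal, might look like:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Normalized RMSE, here normalized by the range of the ground truth.
    (One common convention; the paper's exact normalization is not stated.)"""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# A perfect prediction scores 0; errors are reported relative to signal range.
print(nrmse([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0]))  # 0.0
```

Other normalizations (by the standard deviation or mean of the ground truth) are also used in the literature, which is worth keeping in mind when comparing nRMSE values across papers.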
Researcher Affiliation | Academia | (1) Northeastern University, Boston, Massachusetts; (2) University of Massachusetts, Boston, Massachusetts; (3) Emory University, Atlanta, Georgia
Pseudocode | No | The paper describes the method using mathematical equations and derivations in Section 4, but does not present an explicit pseudocode block or algorithm.
Open Source Code | No | The paper does not contain an explicit statement about the availability of the authors' source code for the methodology described.
Open Datasets | Yes | We demonstrate the performance of the proposed approach leveraging numerical simulations and a real dataset extracted from an electro-mechanical positioning system. Here we evaluate the proposed approach on real data from an electro-mechanical positioning system described in (Janot, Gautier, and Brunot 2019).
Dataset Splits | Yes | For this, we generate data D_T with N = 50,000 samples using the HH model with different initial conditions from the ones used during training. From this data, we reserve the first 100 samples for learning the initial condition before performing integration for the remaining 49,900 samples.
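The evaluation split described above is a simple prefix split; a minimal sketch, using a stand-in array for the HH-model trajectory, could be:

```python
import numpy as np

# Illustration of the split quoted above: of N = 50,000 evaluation samples,
# the first 100 are reserved for estimating the initial condition and the
# remaining 49,900 are predicted by integrating the learned ODE forward.
N = 50_000
data = np.arange(N)        # stand-in for a trajectory from the HH model
init_window = data[:100]   # used to infer the initial condition x_0
rollout = data[100:]       # 49,900 samples the learned model must predict

print(len(init_window), len(rollout))  # 100 49900
```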
Hardware Specification | No | The paper discusses computational complexity and experiment results but does not provide specific details on the hardware used (e.g., GPU/CPU models, memory) for running the experiments.
Software Dependencies | No | The paper mentions using an 'Euler integrator as the ODE solver' and discusses neural network architectures (feed-forward, LSTM) and algorithms (LQR), but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions).
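The forward-Euler integrator named above is the simplest fixed-step ODE solver; a self-contained sketch (the function name and step size are illustrative, not from the paper) is:

```python
import numpy as np

def euler_rollout(f, x0, dt, n_steps):
    """Fixed-step forward-Euler integration of dx/dt = f(x).
    Returns the trajectory including the initial state x0."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = traj[-1]
        traj.append(x + dt * f(x))
    return np.stack(traj)

# Example: dx/dt = -x from x0 = 1 should decay roughly like exp(-t).
traj = euler_rollout(lambda x: -x, [1.0], dt=0.01, n_steps=100)
```

In the paper's setting, `f` would be the learned (physics-informed) vector field; replacing Euler with a higher-order solver only changes this rollout loop, not the model.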
Experiment Setup | Yes | We train our model on D with P_{x0} = 10^{-2} I_{dx}, P_{θ0} = 10^{2} I_{dθ}, R_y = 10^{-10} I_{dy}, Q_x = 10^{-5} I_{dx}, and Q_θ = 10^{-2} I_{dθ}. At the beginning of each epoch, we solve problem (66) of Appendix D to get the initial condition. The first layer is a 20-unit layer followed by an Exponential Linear Unit (ELU) activation function, the second layer is also a 20-unit layer followed by a tanh activation function, and the last layer consists of 10 units with a sigmoid activation function.