Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations
Authors: Maziar Raissi
JMLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers, Korteweg-de Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and Navier-Stokes equations. |
| Researcher Affiliation | Academia | Maziar Raissi (EMAIL), Division of Applied Mathematics, Brown University, Providence, RI 02912, USA |
| Pseudocode | No | The paper describes the methodology for solving and discovering partial differential equations with neural networks and automatic differentiation in prose, explaining the steps involved in using physics-informed neural networks (PINNs) as solvers via equations (1), (2), and (3). It does not contain a formally structured pseudocode or algorithm block. |
| Open Source Code | Yes | All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/DeepHPMs. |
| Open Datasets | Yes | All data and codes used in this manuscript are publicly available on GitHub at https://github.com/maziarraissi/DeepHPMs. |
| Dataset Splits | Yes | Out of this data-set, we generate a smaller training subset, scattered in space and time, by randomly sub-sampling 10000 data points from time t = 0 to t = 6.7. We call the portion of the domain from time t = 0 to t = 6.7 the training portion. The rest of the domain from time t = 6.7 to the final time t = 10 will be referred to as the test portion. |
| Hardware Specification | No | The paper mentions "constrained by usual GPU (graphics processing unit) platforms" but does not provide specific details on the GPU models, CPU models, or any other hardware used for the experiments. |
| Software Dependencies | No | The paper mentions "TensorFlow (Abadi et al., 2016)" and "Chebfun package (Driscoll et al., 2014)" as tools used, but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We represent the solution u by a 5-layer deep neural network with 50 neurons per hidden layer. Furthermore, we let N be a neural network with 2 hidden layers and 100 neurons per hidden layer. As for the activation functions, we use sin(x). These two networks are trained by minimizing the sum of squared errors loss of equation (3). |
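The quoted setup can be sketched as a pair of fully connected networks with sin activations. The sketch below is a minimal NumPy illustration, not the paper's TensorFlow implementation: it assumes "5-layer" means five hidden layers, assumes N takes three arguments (e.g. u, u_x, u_xx, which is problem-dependent), and omits the automatic differentiation and the sum-of-squared-errors loss of equation (3).

```python
import numpy as np

def mlp(widths, rng):
    """Initialize weights and biases for a fully connected network."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(widths[:-1], widths[1:])]

def forward(params, x):
    """Apply the network; sin activation on all but the output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.sin(x)
    return x

rng = np.random.default_rng(0)

# u(t, x): five hidden layers, 50 neurons each, mapping (t, x) -> u
u_net = mlp([2, 50, 50, 50, 50, 50, 1], rng)

# N(...): two hidden layers, 100 neurons each; input width 3 is an
# assumption standing in for the derivatives fed to N
N_net = mlp([3, 100, 100, 1], rng)

pts = rng.standard_normal((10, 2))  # 10 scattered (t, x) training points
print(forward(u_net, pts).shape)    # one prediction of u per point
```

In the paper both networks are trained jointly, with the derivatives of u (computed by automatic differentiation) fed into N; this sketch only fixes the layer widths quoted in the table.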