Learning Spatiotemporal Dynamical Systems from Point Process Observations
Authors: Valerii Iakovlev, Harri Lähdesmäki
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we demonstrate properties of our method and compare it against other methods from the literature. Our datasets are generated by three commonly-used PDE systems: Burgers (models nonlinear 1D wave propagation), Shallow Water (models 2D wave propagation under gravity), and Navier-Stokes with transport (models the spread of a pollutant in a liquid over a 2D domain). In addition to the synthetic data, we include a real-world dataset Scalar Flow (Eckert et al., 2019)... Table 3: Model comparisons. MAE (↓) and Log-lik (per event) (↑) on test data. |
| Researcher Affiliation | Academia | Valerii Iakovlev, Harri Lähdesmäki, Department of Computer Science, Aalto University, Finland, EMAIL |
| Pseudocode | No | The paper describes the methodology in prose and mathematical formulations. There are no explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code and datasets can be found in our github repository. |
| Open Datasets | Yes | Our datasets are generated by three commonly-used PDE systems: Burgers (models nonlinear 1D wave propagation), Shallow Water (models 2D wave propagation under the gravity), and Navier-Stokes with transport... We include a real-world dataset Scalar Flow (Eckert et al., 2019)... We obtain data for this system from the PDEBench dataset (Takamoto et al., 2023)... Source code and datasets can be found in our github repository. |
| Dataset Splits | Yes | For all datasets we use 80%/10%/10% train/validation/test splits... We use the first 0.5 seconds as the context for the initial state inference... For Scalar Flow dataset... We train our and other models on 80 trajectories, and use 10 trajectories for validation, and 10 for testing. |
| Hardware Specification | Yes | In all cases our model has at most 3 million parameters, and training takes at most 1.5 hours on a single GeForce RTX 3080 GPU. |
| Software Dependencies | No | The paper mentions using the 'torchdiffeq (Chen, 2018) package' and 'torchquad (Gómez et al., 2024) package' but does not provide specific version numbers for these software dependencies, which are necessary for full reproducibility. |
| Experiment Setup | Yes | The training is done for 25k iterations with learning rate 3e-4 and batch size 32. We use the adaptive ODE solver (dopri5) from the torchdiffeq package with relative and absolute tolerance set to 1e-5. We use the AdamW (Loshchilov & Hutter, 2019) optimizer with constant learning rate 3e-4 (we use linear warmup for the first 250 iterations)... Monte Carlo integration is done using a sample size of one... with 32 randomly sampled points for training, and 256 randomly sampled points for testing... We set the resolution of the uniform temporal grid τ1, ..., τn to n = 50. |
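The reported optimizer schedule (constant learning rate 3e-4 with linear warmup over the first 250 of 25k iterations) can be sketched as a plain function. This is a hypothetical reconstruction, not the authors' code; the paper only states "linear warmup for first 250 iterations", so the exact interpolation at step 0 is an assumption.

```python
# Sketch of the training schedule described in the paper:
# linear warmup over WARMUP_ITERS steps, then a constant rate.
BASE_LR = 3e-4       # learning rate reported in the paper
WARMUP_ITERS = 250   # warmup length reported in the paper
TOTAL_ITERS = 25_000 # total training iterations reported in the paper

def learning_rate(step: int) -> float:
    """Linearly ramp up to BASE_LR over WARMUP_ITERS steps, then stay constant."""
    if step < WARMUP_ITERS:
        # Ramp from BASE_LR / WARMUP_ITERS (step 0) up to BASE_LR (assumed endpoints).
        return BASE_LR * (step + 1) / WARMUP_ITERS
    return BASE_LR
```

In a PyTorch training loop this would typically be wired in via `torch.optim.lr_scheduler.LambdaLR` wrapping an `AdamW` optimizer, with the lambda returning the warmup fraction per step.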