No Equations Needed: Learning System Dynamics Without Relying on Closed-Form ODEs

Authors: Krzysztof Kacprzyk, Mihaela van der Schaar

ICLR 2025

Reproducibility Assessment
Each variable is listed with its result, followed by the supporting LLM response (quoted from the paper where applicable).
Research Type: Experimental
"We have fitted Semantic ODE with the inductive bias described earlier and versions of SINDy with different sparsity constraints. The results can be seen in Table 2. ... Table 3: Comparison of Average RMSE obtained by different models. Average performance over 5 random seeds and data splits is shown with standard deviations in the brackets. ... Additional experiments can be found in Appendix B. Details on experiments are available in Appendix E."
Researcher Affiliation: Academia
"Krzysztof Kacprzyk, University of Cambridge, EMAIL; Mihaela van der Schaar, University of Cambridge, EMAIL"
Pseudocode: Yes
"A block diagram is presented in Figure 6, and the pseudocode of the training procedure can be found in Appendix C. ... Algorithm 1: Algorithm for learning the Composition Map Fcom. ... Algorithm 2: Algorithm for Learning the Property Map Fprop."
Open Source Code: Yes
"All experimental code is available at https://github.com/krzysztof-kacprzyk/SemanticODE."
Open Datasets: Yes
"Pharmacokinetic model: The pharmacokinetic dataset is based on the pharmacokinetic model developed by Woillard et al. (2011). ... Logistic growth: The logistic growth dataset is described by the following equation (Verhulst, 1845). ... Mackey-Glass: The Mackey-Glass dataset is described by the following Mackey-Glass equation (Mackey & Glass, 1977). ... The tumor growth dataset is based on the dataset collected by Wilkerson et al. (2017). ... The drug concentration dataset is based on data collected by Woillard et al. (2011)."
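For reference, the standard textbook forms of the two named equations are shown below. The paper's exact parameterization is not quoted in this report, so the symbols (r, K, beta, n, gamma, tau) are the conventional ones rather than necessarily the paper's:

```latex
% Logistic growth (Verhulst): growth rate r, carrying capacity K
\frac{dx}{dt} = r\, x \left(1 - \frac{x}{K}\right)

% Mackey-Glass delay differential equation: delay \tau
\frac{dx}{dt} = \beta\, \frac{x(t-\tau)}{1 + x(t-\tau)^{n}} - \gamma\, x(t)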
Dataset Splits: Yes
"For 5 different seeds, the dataset is randomly split into training, validation, and test datasets with ratios 0.7 : 0.15 : 0.15."
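The split described above can be sketched as follows. The dataset size and the use of Python's `random` module are illustrative assumptions, not details from the paper:

```python
import random

def split_indices(n, seed, ratios=(0.7, 0.15, 0.15)):
    """Shuffle indices with a fixed seed and split train/val/test by ratio."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]  # remainder goes to the test set
    return train, val, test

# Repeat for 5 different seeds, as in the paper.
splits = [split_indices(1000, seed) for seed in range(5)]
train, val, test = splits[0]
print(len(train), len(val), len(test))  # → 700 150 150
```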
Hardware Specification: No
The paper does not provide specific hardware details (e.g., GPU/CPU models, processor speeds, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies: No
The paper mentions several software components, including PySINDy, MIOSR, PySR, torchdiffeq, the Adam optimizer, Optuna, PyTorch, and SymPy. However, it does not provide version numbers for any of these components, which reproducibility requires.
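One way to close this gap when re-running the released code is to record the installed versions directly. The sketch below assumes the PyPI distribution names shown (e.g. that PySINDy is distributed as `pysindy`); the paper itself does not state them:

```python
from importlib import metadata

# Distribution names assumed for the packages the paper mentions;
# the actual PyPI names may differ from the names used in the text.
PACKAGES = ["pysindy", "pysr", "torchdiffeq", "torch", "optuna", "sympy"]

def report_versions(packages):
    """Return a pip-style pin for each installed package, or a marker if absent."""
    lines = []
    for name in packages:
        try:
            lines.append(f"{name}=={metadata.version(name)}")
        except metadata.PackageNotFoundError:
            lines.append(f"{name}: not installed")
    return lines

print("\n".join(report_versions(PACKAGES)))
```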
Experiment Setup: Yes
"We set the batch size to 32 and train for 200 epochs using Adam optimizer (Kingma & Ba, 2017). We tune hyperparameters using Optuna (Akiba et al., 2019) for 20 trials. Ranges for the hyperparameters are shown in Table 13. ... The property maps are trained using L-BFGS as implemented in PyTorch. We fix the penalty term for the difference between derivatives to be 0.01 and we perform hyperparameter tuning of each property sub-map to find the optimal learning rate (between 1e-4 and 1.0) and the penalty term for the first derivative at the last transition point (between 1e-9 and 1e-1)."
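The search over the stated ranges can be sketched as follows. This is a minimal random-search stand-in for the paper's 20-trial Optuna search, using log-uniform sampling as Optuna's `suggest_float(..., log=True)` does; the objective shown is a toy placeholder, not the paper's validation loss:

```python
import math
import random

def sample_log_uniform(rng, low, high):
    """Draw a value log-uniformly from [low, high]."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

def tune(objective, n_trials=20, seed=0):
    """Random-search stand-in for a 20-trial Optuna study (direction: minimize)."""
    rng = random.Random(seed)
    best_params, best_value = None, float("inf")
    for _ in range(n_trials):
        params = {
            # Ranges taken from the quoted setup.
            "lr": sample_log_uniform(rng, 1e-4, 1.0),
            "deriv_penalty": sample_log_uniform(rng, 1e-9, 1e-1),
        }
        value = objective(params)
        if value < best_value:
            best_params, best_value = params, value
    return best_params, best_value

# Toy objective standing in for validation error; the real one would train a
# property sub-map with L-BFGS at these settings and return its validation RMSE.
def toy_objective(p):
    return (math.log10(p["lr"]) + 2) ** 2 + (math.log10(p["deriv_penalty"]) + 5) ** 2

params, value = tune(toy_objective)
print(params, value)
```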