Lurie Networks with Robust Convergent Dynamics
Authors: Carl R Richardson, Matthew C. Turner, Steve R. Gunn
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show improvements in prediction accuracy, generalisation and robustness on a range of simulated dynamical systems when the graph structure and k-contraction conditions are introduced. These results also compare favourably against other well-known stability-constrained models and an unconstrained neural ODE. |
| Researcher Affiliation | Academia | 1 School of Electronics and Computer Science, University of Southampton, Southampton, UK 2 The Alan Turing Institute, London, UK |
| Pseudocode | No | The paper describes the proposed methods and mathematical formulations in sections like "3 Lurie Network" and "4 Parametrisation of k-contracting Lurie Networks", but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | All code was implemented in PyTorch and can be found at https://github.com/CR-Richardson/Lurie Network. |
| Open Datasets | Yes | The state dimension of these datasets is significantly larger than in other dynamical systems datasets, such as: (i) the LASA dataset (Lemme et al., 2015), where the 2-d trajectories are typically stacked to form 4-d or 8-d trajectories [...] |
| Dataset Splits | Yes | The test sets were formed by holding out 100 trajectories. The input to each model was the initial condition sampled from a uniform distribution with the domain (−1, +1)³ for the opinion/Hopfield datasets and (−3, +3)³ for the simple attractor. The full trajectory was then used as the target to train the model. |
| Hardware Specification | Yes | The lowest MSE is presented alongside the mean and standard deviation calculated after training each model N = 3 times on a single T4 GPU (Google Colab). [...] calculated after training each model N = 3 times on a single A100 GPU (Google Colab). |
| Software Dependencies | No | All code was implemented in PyTorch and can be found at https://github.com/CR-Richardson/Lurie Network. The paper mentions PyTorch but does not specify a version number or any other software with versioning information. |
| Experiment Setup | Yes | The training settings used are explicitly detailed in Appendix D. For all models and all datasets, the mean squared error (MSE) loss was used alongside the Adam optimiser. All code was implemented in PyTorch and can be found at https://github.com/CR-Richardson/Lurie Network. [...] Table 5: Default training settings for opinion, Hopfield and attractor datasets: Batches = 10; Batch size = 100; Test split = 0.1; Epochs = 100; Criterion = Mean squared error; Optimiser = Adam (default settings); Learning rate (LR) = 1 × 10⁻² (no decay). |
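The reported setup (MSE loss, Adam with default settings, learning rate 1 × 10⁻², batch size 100, 100 epochs, initial conditions drawn uniformly from (−1, +1)³) can be sketched as a minimal PyTorch training loop. This is a hedged illustration only: the two-layer network below is a generic placeholder, not the paper's Lurie Network, and the target trajectories are random stand-ins rather than the opinion/Hopfield/attractor data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder model: maps a 3-d initial condition to a flattened predicted
# trajectory of T steps (the paper trains on full trajectories as targets).
# This is NOT the Lurie Network architecture, just a generic stand-in.
state_dim, T = 3, 50
model = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.Tanh(),
    nn.Linear(64, state_dim * T),
)

# Initial conditions sampled uniformly from (-1, +1)^3, matching the reported
# domain for the opinion/Hopfield datasets; targets here are random placeholders.
x0 = torch.empty(100, state_dim).uniform_(-1.0, 1.0)  # batch size 100
target = torch.randn(100, state_dim * T)

# Table 5 settings: MSE criterion, Adam (default settings), LR 1e-2, no decay.
criterion = nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)

losses = []
for epoch in range(100):  # 100 epochs
    optimiser.zero_grad()
    loss = criterion(model(x0), target)
    loss.backward()
    optimiser.step()
    losses.append(loss.item())
```

In the paper's actual pipeline the held-out 100 test trajectories would be evaluated with the same MSE criterion after training; that evaluation step is omitted here.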