Modeling Latent Non-Linear Dynamical System over Time Series

Authors: Ren Fujiwara, Yasuko Matsubara, Yasushi Sakurai

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we run experiments on synthetic data with known ground-truth systems to evaluate the accuracy and robustness of our method. We compare LaNoLem against three state-of-the-art baselines to measure accuracy and robustness, and we show the importance of latent states. On 71 chaotic benchmark datasets, our model achieves competitive performance in estimating dynamics while consistently outperforming state-of-the-art methods in prediction tasks.
Researcher Affiliation | Academia | Ren Fujiwara, Yasuko Matsubara, Yasushi Sakurai (SANKEN, Osaka University)
Pseudocode | Yes | Algorithm 1: Optimization algorithm (X); Algorithm 2: Learning ({ŝ(t)}_{t=1}^{N}, M, ω)
Open Source Code | Yes | Code: https://github.com/renfujiwara/LaNoLem
Open Datasets | Yes | We use synthetic data obtained from the dysts database (Gilpin 2021), which provides data, equations, and dynamical properties for chaotic systems exhibiting strange attractors, drawn from disparate scientific fields.
Dataset Splits | No | The paper mentions evaluating on synthetic and chaotic benchmark data but does not describe explicit train/validation/test splits.
Hardware Specification | No | The experimental settings are detailed in Appendix E, which contains a detailed description of the experimental conditions and hyperparameters used in our study. However, the main body of the paper does not specify any hardware details.
Software Dependencies | No | The experimental settings are detailed in Appendix E, which contains a detailed description of the experimental conditions and hyperparameters used in our study. However, the main body of the paper does not specify any software dependencies with version numbers.
Experiment Setup | No | The experimental settings are detailed in Appendix E, which contains a detailed description of the experimental conditions and hyperparameters used in our study. The main text mentions varying noise ratios (5%, 25%, 50%) but does not provide specific hyperparameters such as learning rates, batch sizes, or optimizer settings within the main content.
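To make the experimental setup above concrete, here is a minimal, hypothetical sketch of how synthetic chaotic data of the kind provided by the dysts database can be generated and corrupted with observation noise at the stated ratios. The Lorenz system is used here only as a representative strange attractor; the function names and the noise-ratio definition (noise standard deviation = ratio × signal standard deviation) are assumptions for illustration, not taken from the paper or from the dysts API.

```python
import math
import random

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 integration step of the Lorenz system (illustrative only)."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6.0 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

def simulate_lorenz(n_steps, dt=0.01, init=(1.0, 1.0, 1.0)):
    """Generate a trajectory on the Lorenz strange attractor."""
    traj, state = [], init
    for _ in range(n_steps):
        state = lorenz_step(state, dt)
        traj.append(state)
    return traj

def add_noise(series, ratio, rng):
    """Add Gaussian noise with std = ratio * std(series); the ratio
    definition is an assumption made for this sketch."""
    mean = sum(series) / len(series)
    std = math.sqrt(sum((v - mean) ** 2 for v in series) / len(series))
    return [v + rng.gauss(0.0, ratio * std) for v in series]

traj = simulate_lorenz(2000)
xs = [s[0] for s in traj]  # observe the x-coordinate only
rng = random.Random(0)
# Noise ratios matching those mentioned in the main text: 5%, 25%, 50%
noisy = {r: add_noise(xs, r, rng) for r in (0.05, 0.25, 0.50)}
```

In a real pipeline the trajectory would come from the dysts package rather than a hand-rolled integrator, and the clean trajectory serves as the ground truth against which estimated dynamics and predictions are scored under each noise level.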