Learning Deep Dissipative Dynamics
Authors: Yuji Okamoto, Ryosuke Kojima
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we demonstrate the robustness of our method against out-of-domain input through applications to robotic arms and fluid dynamics. ... (iv) We confirmed the effectiveness of our method with three experiments with benchmark data. ... We conducted three experiments to evaluate our proposed method. The first experiment uses a benchmark dataset generated from a mass-spring-damper system, which is a classic example from physics and engineering. In the next experiment, we evaluate our methods by an n-link pendulum system, a nonlinear dynamical system related to robotic arm applications. Finally, we applied our method to learning an input-output fluid system using a fluid simulator. |
| Researcher Affiliation | Academia | Yuji Okamoto¹*, Ryosuke Kojima¹٫²* (1 Kyoto University, Japan; 2 RIKEN BDR, Japan) |
| Pseudocode | No | The paper does not contain any sections explicitly labeled "Pseudocode" or "Algorithm," nor does it present any structured, step-by-step procedures in a code-like format. The methodology is described using mathematical equations and textual explanations. |
| Open Source Code | Yes | Code: https://github.com/kojima-r/DeepDissipativeModel |
| Open Datasets | No | The paper describes using a "benchmark dataset generated from a mass-spring-damper system," an "n-link pendulum system," and "fluid simulations." While it references a paper for the fluid model ("Schäfer et al. 1996"), it does not provide concrete access information (links, DOIs, or specific citations to data repositories) for the datasets used in the experiments. It appears the data was generated from these systems for the purpose of the study. |
| Dataset Splits | Yes | The first and fourth rows of this table show the results of evaluation using N rectangle input signals for training data and different 0.1N rectangle input signals for testing. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running its experiments. It discusses computational aspects in terms of methods (neural ODEs, Euler method) but not the underlying hardware. |
| Software Dependencies | No | The paper mentions using "neural ODE" as an internal solver and the "Euler method" for simplicity, along with "Optuna" for hyperparameter optimization. However, it does not specify any version numbers for these software components or libraries, which is required for reproducibility. |
| Experiment Setup | Yes | The hyperparameters, including the number of layers in the neural networks, the learning rate, optimizer, and the weight decay, are determined using the tree-structured Parzen estimator (TPE) implemented in Optuna (Akiba et al. 2019) (see Appendix L). For simplicity in the experiments, the sampling step Δt for the output y is set as constant and the Euler method is used to solve neural ODEs. The initial state x0 in this ODE is fixed as 0 for simplicity. |
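The training setup quoted above (forward-Euler integration of a neural ODE with a constant sampling step and initial state fixed to zero) can be illustrated with a minimal sketch. The vector field `f` below is a toy linear system standing in for the paper's learned network; the function name, the dynamics, and the step size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def euler_rollout(f, inputs, dt, x0):
    """Integrate dx/dt = f(x, u) with the forward Euler method,
    one fixed step dt per input sample, starting from x0."""
    x = x0
    trajectory = [x]
    for u in inputs:
        x = x + dt * f(x, u)  # single Euler step
        trajectory.append(x)
    return np.stack(trajectory)

# Toy vector field standing in for the learned network:
# a damped linear system dx/dt = A x + B u (illustrative only).
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([0.0, 1.0])
f = lambda x, u: A @ x + B * u

dt = 0.01                 # constant sampling step, as in the paper
inputs = np.ones(100)     # rectangle (step) input signal
x0 = np.zeros(2)          # initial state fixed to 0, as in the paper
traj = euler_rollout(f, inputs, dt, x0)
print(traj.shape)         # (101, 2): initial state plus 100 steps
```

In practice one would replace `f` with a neural network and backpropagate through the unrolled steps; the fixed-step Euler choice trades integration accuracy for simplicity and speed, which matches the paper's stated rationale.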