Learning Optimal Feedback Operators and their Sparse Polynomial Approximations
Authors: Karl Kunisch, Donato Vásquez-Varas, Daniel Walter
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the present work we propose, analyze, and numerically test a learning approach to obtain optimal feedback laws. [...] Finally, in Section 9 we present four numerical experiments which show that our algorithm is able to solve non-linear and high (here the dimension is 40) dimensional control problems in a standard laptop environment. |
| Researcher Affiliation | Academia | Karl Kunisch EMAIL Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, and Institute of Mathematics and Scientific Computing, University of Graz [...] Donato Vásquez-Varas EMAIL Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences [...] Daniel Walter EMAIL Institut für Mathematik, Humboldt-Universität zu Berlin |
| Pseudocode | Yes | Algorithm 1 Sparse polynomial learning algorithm. Require: An initial guess θ₀ ∈ ℝ^M, γ > 0, κ > 0, β ∈ (0, 1), T > 0, r ∈ [0, 1], s₀ ∈ (0, ∞). Ensure: An approximated stationary point θ of (20). |
| Open Source Code | No | The proposed methodology was implemented in python and tested on 4 problems. These experiments demonstrate the robustness and efficiency of the approach for obtaining approximative feedback-laws for nonlinear and high dimensional problems. |
| Open Datasets | No | For every experiment we shall specify the computational time horizon, and the sets of initial conditions for training and testing. For all experiments, we utilized a Monte Carlo based uniform sampling for the training sets. |
| Dataset Splits | Yes | randomly choose 5 sets Y^j_train with j ∈ {1, . . . , 5}, each of cardinality 10, from Ω. A test set of cardinality 200 was randomly and uniformly sampled from Ω. From each Y^j_train we take a sequence of increasing subsets {Y^j_i}_{i=1}^{10}, such that for each i ∈ {1, . . . , 10} the cardinality of Y^j_i is i. |
| Hardware Specification | No | Finally, in Section 9 we present four numerical experiments which show that our algorithm is able to solve non-linear and high (here the dimension is 40) dimensional control problems in a standard laptop environment. |
| Software Dependencies | No | The proposed methodology was implemented in python and tested on 4 problems. |
| Experiment Setup | Yes | We set T = 10, l = 10, γ = 10⁻¹⁰, and r = 0.1, and randomly choose 5 sets Y^j_train with j ∈ {1, . . . , 5}, each of cardinality 10, from Ω. [...] All learning problems were solved by Algorithm 1 with the choice (40) in steps 4 and 8, κ = 0.5, and β = 0.9, except for the Optimal Consensus for Cucker-Smale problem where (38) was used. In all cases the stopping criterion (43) was fulfilled with gtol = 10⁻³ and tol = 10⁻⁵. |
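The quoted excerpts name the hyperparameters of Algorithm 1 (κ = 0.5, β = 0.9, γ = 10⁻¹⁰, gtol = 10⁻³, tol = 10⁻⁵) without reproducing the algorithm itself. As a rough orientation for how such parameters typically interact, the sketch below shows a generic gradient descent with Armijo-style backtracking (β as the step-shrink factor, κ as the sufficient-decrease constant) and the two quoted stopping tolerances. This is an illustrative assumption, not the paper's Algorithm 1: the objective `J` here is a placeholder quadratic, not the learning problem (20), and the choices (38)/(40) for steps 4 and 8 are not modeled.

```python
import numpy as np

def backtracking_descent(J, grad_J, theta0,
                         kappa=0.5, beta=0.9,
                         gtol=1e-3, tol=1e-5, max_iter=500):
    """Generic sketch: gradient descent with Armijo backtracking.

    Only the hyperparameter values are taken from the report's quotes;
    the loop structure is a standard textbook scheme, NOT the paper's
    Algorithm 1.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad_J(theta)
        if np.linalg.norm(g) < gtol:          # gradient stopping criterion
            break
        s = 1.0                                # initial trial step size
        # Shrink s by beta until the Armijo sufficient-decrease test holds.
        while J(theta - s * g) > J(theta) - kappa * s * np.dot(g, g):
            s *= beta
            if s < 1e-12:
                break
        theta_new = theta - s * g
        if np.linalg.norm(theta_new - theta) < tol:  # step-size criterion
            theta = theta_new
            break
        theta = theta_new
    return theta

# Toy quadratic stand-in for the learning objective (hypothetical).
A = np.diag([1.0, 10.0])
J = lambda th: 0.5 * th @ A @ th
grad_J = lambda th: A @ th
theta = backtracking_descent(J, grad_J, np.array([1.0, 1.0]))
```

On this toy objective the loop drives the gradient norm below the quoted tolerances; the paper's actual algorithm additionally handles the sparsity-promoting structure of the polynomial ansatz, which this sketch omits.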