Nonparametric adaptive control and prediction: theory and randomized algorithms

Authors: Nicholas M. Boffi, Stephen Tu, Jean-Jacques E. Slotine

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | As an illustration of the method, we demonstrate the ability of the randomized approximation algorithm to learn a predictive model of a 60-dimensional system consisting of ten point masses interacting through Newtonian gravitation. We show empirically that the extra expressivity afforded by deep representations can lead to improved performance at the expense of the closed-loop stability that is rigorously guaranteed and consistently observed for kernel machines. We now study the empirical performance of the nonparametric method and its randomized approximation.
Researcher Affiliation | Collaboration | Nicholas M. Boffi EMAIL Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA; Stephen Tu EMAIL Google Brain Robotics, New York, NY 10011, USA; Jean-Jacques E. Slotine EMAIL Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Pseudocode | No | The paper describes methods through mathematical formulations and prose, such as the derivation of Lyapunov-based update laws and gradient flows, but it does not present any explicit pseudocode blocks or algorithms labeled as such.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the methodology described, nor does it provide links to any code repositories.
Open Datasets | No | The paper illustrates its methods using a "synthetic adaptive control problem" and a "60-dimensional system consisting of ten point masses interacting through Newtonian gravitation." These are problem descriptions or simulated systems, not references to specific, publicly available datasets with access information.
Dataset Splits | No | The paper evaluates its methods on a synthetic adaptive control problem and a simulated m-body system. Since these are not pre-existing datasets, there is no mention of training/test/validation splits. The experimental setup describes the simulation parameters, but not dataset partitioning.
Hardware Specification | No | The paper describes simulations in Section 7, but it does not specify any hardware details such as GPU models, CPU types, or other computing resources used to run these simulations.
Software Dependencies | No | The paper mentions "forward Euler integration with a fixed timestep Δt = 0.001" and refers to "random Fourier features described in Section 5.2." However, it does not provide specific names or version numbers for any software libraries, programming languages, or other dependencies used in the implementation of the algorithms.
Experiment Setup | Yes | Implementation: We apply a nonparametric input generated by the Gaussian kernel K(x, y) = exp(−‖x − y‖² / (2σ²)) I, with σ = 0.1. For its randomized approximation, we use the random Fourier features described in Section 5.2. Both the randomized and nonparametric adaptive laws are obtained by forward Euler integration with a fixed timestep Δt = 0.001. We set γ = 20 for the random feature adaptation law and γ = 10 for the neural network. We consider single hidden-layer neural networks with a width of 32 or 64 neurons and the swish activation function.
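The random Fourier feature approximation referenced in the setup above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the input dimension, feature count, and seed are illustrative assumptions. It uses the standard construction (frequencies drawn from N(0, σ⁻²I), uniform random phases) so that the inner product of feature maps approximates the Gaussian kernel with bandwidth σ = 0.1 from the paper's setup:

```python
import numpy as np

def random_fourier_features(x, omega, b):
    """Map x to D random Fourier features whose inner product
    approximates a Gaussian (RBF) kernel."""
    D = omega.shape[0]
    return np.sqrt(2.0 / D) * np.cos(omega @ x + b)

rng = np.random.default_rng(0)
sigma = 0.1        # kernel bandwidth, as in the paper's setup
d, D = 3, 500      # input dimension and feature count (illustrative)

# Frequencies ~ N(0, sigma^-2 I); phases ~ Uniform[0, 2*pi)
omega = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

# Two nearby test points (close relative to sigma, so the kernel is nontrivial)
x = 0.05 * rng.normal(size=d)
y = 0.05 * rng.normal(size=d)

exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = random_fourier_features(x, omega, b) @ random_fourier_features(y, omega, b)
print(exact, approx)  # the two values agree up to O(1/sqrt(D)) Monte Carlo error
```

In an adaptive-control setting, the feature map plays the role of a fixed randomized basis: the adaptation law updates a weight vector on these features rather than maintaining the full nonparametric kernel expansion.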