Robust System Identification: Finite-sample Guarantees and Connection to Regularization
Authors: Hyuk Park, Grani A. Hanasusanto, Yingying Li
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate substantial improvements in real-world system identification and online control tasks, outperforming existing methods. |
| Researcher Affiliation | Academia | Hyuk Park, Grani A. Hanasusanto, Yingying Li — University of Illinois Urbana-Champaign |
| Pseudocode | No | The paper describes methods and algorithms in text, such as 'robust LSE framework' and 'online linear quadratic control algorithms', but does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions implementation details, 'Both the proposed approach and benchmark models are implemented in Python 3.7. Specifically, a neural network model is implemented using TensorFlow (Abadi et al., 2015), while the optimization problem (7) is modeled with the CVXPY (Diamond & Boyd, 2016) interface and solved using the commercial solver MOSEK (ApS, 2024).', but it does not state that the authors' source code is publicly available, nor does it provide a link. |
| Open Datasets | Yes | Using the wind speed data from fedesoriano (2022), we implemented the optimized HHT-NAR method from Chen et al. (2024) (here simply referred to as LSE), along with our robust version, and evaluated the prediction accuracy for the next 50 daily wind speeds. |
| Dataset Splits | Yes | For the robust LSE, we use a 3-fold cross-validation procedure to determine an initial value of the regularization parameter, as follows. We split the samples into three equal-sized subsets where two of the three subsets are put together to learn the robust estimate. The resulting estimate is then tested on the remaining set for all ε = (a · 10^b)/T where a ∈ {1, 3, 5, 7, 9} and b ∈ {−3, . . . , 3}. This process is repeated three times for different partitions of the samples to choose the ε that performs best overall. |
| Hardware Specification | Yes | All experiments were conducted on a laptop equipped with a 6-core, 2.3 GHz Intel Core i7 CPU and 16 GB of RAM. |
| Software Dependencies | Yes | Both the proposed approach and benchmark models are implemented in Python 3.7. Specifically, a neural network model is implemented using TensorFlow (Abadi et al., 2015), while the optimization problem (7) is modeled with the CVXPY (Diamond & Boyd, 2016) interface and solved using the commercial solver MOSEK (ApS, 2024). |
| Experiment Setup | Yes | For the robust LSE, we use a 3-fold cross-validation procedure to determine an initial value of the regularization parameter, as follows. We split the samples into three equal-sized subsets where two of the three subsets are put together to learn the robust estimate. The resulting estimate is then tested on the remaining set for all ε = (a · 10^b)/T where a ∈ {1, 3, 5, 7, 9} and b ∈ {−3, . . . , 3}. This process is repeated three times for different partitions of the samples to choose the ε that performs best overall. |
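The cross-validation procedure quoted above can be sketched in a few lines. The grid ε = (a · 10^b)/T with a ∈ {1, 3, 5, 7, 9} and b ∈ {−3, …, 3} yields 35 candidates, evaluated by 3-fold cross-validation. Since the paper solves the robust LSE with CVXPY/MOSEK, and its code is not public, the sketch below substitutes ridge regression as a hypothetical stand-in estimator (echoing the paper's robustness-regularization connection); `ridge_fit` and the mapping from ε to the ridge penalty are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def eps_grid(T):
    """All 35 candidates eps = (a * 10**b) / T, a in {1,3,5,7,9}, b in {-3,...,3}."""
    return sorted(a * 10.0 ** b / T for a in (1, 3, 5, 7, 9) for b in range(-3, 4))

def ridge_fit(X, y, lam):
    # Hypothetical stand-in for the robust LSE: ridge with penalty lam.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_select_eps(X, y, n_folds=3, seed=0):
    """3-fold CV: train on two folds, score on the held-out fold, for every eps."""
    T = len(y)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(T), n_folds)
    best_eps, best_err = None, np.inf
    for eps in eps_grid(T):
        err = 0.0
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            w = ridge_fit(X[train], y[train], eps)  # eps -> penalty: an assumption
            err += np.mean((X[test] @ w - y[test]) ** 2)
        if err < best_err:
            best_eps, best_err = eps, err
    return best_eps
```

Selecting the best ε over all 35 candidates costs 3 × 35 fits; with the actual robust LSE each fit would be a CVXPY/MOSEK solve rather than a closed-form ridge solution.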