Deep Koopman Learning using Noisy Data
Authors: Wenjian Hao, Devesh Upadhyay, Shaoshuai Mou
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The performance of the proposed method is demonstrated on several standard benchmarks. We then compare the presented method with similar methods proposed in the latest literature on Koopman learning. ... In this experiment, we first gather noise-free state-input pairs D = {(x_t, u_t)}_{t=0}^{T} from the aforementioned four examples... To facilitate training and testing, we allocate 80% of DG to train DKND (denoted as DG_train), reserving the remaining 20% for testing (denoted as DG_test). For performance evaluation, we compute the root mean square deviation (RMSD) over the test dataset DG_test... As presented in Tables 1-2, the proposed DKND method achieves a smaller average RMSD and standard deviation on testing data when compared to other methods, even as the complexity of the dynamics increases. |
| Researcher Affiliation | Collaboration | Wenjian Hao (EMAIL), School of Aeronautics and Astronautics, Purdue University; Devesh Upadhyay (EMAIL), Saab, Inc.; Shaoshuai Mou (EMAIL), School of Aeronautics and Astronautics, Purdue University |
| Pseudocode | Yes | Algorithm 1: Deep Koopman learning with the noisy data (DKND) |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. It only refers to a third-party library documentation: 'We refer to https://pytorch.org/docs/stable/nn.html for the definition of functions Linear(), ReLU()'. |
| Open Datasets | Yes | cartpole (x_t ∈ R^4, u_t ∈ R) and lunar lander (x_t ∈ R^6, u_t ∈ R^2) examples from the OpenAI Gym (Brockman et al., 2016), and one real-world example of unmanned surface vehicles (x_t ∈ R^6, u_t ∈ R^2), of which the details can be found in Li et al. (2024). |
| Dataset Splits | Yes | To facilitate training and testing, we allocate 80% of DG to train DKND (denoted as DG_train), reserving the remaining 20% for testing (denoted as DG_test). |
| Hardware Specification | Yes | Compute device Apple M2, 16GB RAM |
| Software Dependencies | No | The paper mentions 'Optimizer Adam' and implicitly PyTorch (from the provided URL for DNN function definitions), but it does not specify version numbers for these software components. 'Adam' names an optimization algorithm rather than a versioned software package, and no PyTorch version is given. |
| Experiment Setup | Yes | Accuracy (ϵ): 1e-4; Training epochs (S): 1e4; Learning rate (α_k): 1e-5; Number of data pairs (T): 500 / 600 / 1600 / 600 (one value per example) |
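The evaluation protocol quoted above (an 80/20 train/test split of the gathered state-input pairs, followed by RMSD computed over the test set) can be sketched as follows. This is a minimal illustration, not the authors' released code; the function names `split_dataset` and `rmsd` and the toy trajectory are assumptions for demonstration.

```python
import numpy as np

def split_dataset(pairs, train_frac=0.8):
    """Split the sequence of state-input pairs: first 80% for training,
    remaining 20% for testing, as described in the paper."""
    n_train = int(len(pairs) * train_frac)
    return pairs[:n_train], pairs[n_train:]

def rmsd(predicted, actual):
    """Root mean square deviation between predicted and true states."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# Hypothetical usage on a toy cartpole-like trajectory of T = 500 pairs
# (state x_t in R^4, input u_t in R), matching the smallest T in the table.
pairs = [(np.random.randn(4), np.random.randn(1)) for _ in range(500)]
train, test = split_dataset(pairs)
print(len(train), len(test))  # 400 100
```

Whether the split is chronological (as sketched here) or shuffled is not stated in the quoted excerpt; a shuffled split would simply permute `pairs` first.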