Feedback Favors the Generalization of Neural ODEs
Authors: Jindou Jia, Zihan Yang, Meng Wang, Kexin Guo, Jianfei Yang, Xiang Yu, Lei Guo
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive tests including trajectory prediction of a real irregular object and model predictive control of a quadrotor with various uncertainties, are implemented, indicating significant improvements over state-of-the-art model-based and learning-based methods. |
| Researcher Affiliation | Academia | ¹Beihang University, ²Hangzhou Innovation Institute of Beihang University, ³Nanyang Technological University |
| Pseudocode | Yes | Algorithm 1 Learning neural feedback through domain randomization |
| Open Source Code | Yes | Codes are available at https://sites.google.com/view/feedbacknn. |
| Open Datasets | Yes | We test the effectiveness of the proposed method on an open-source dataset (Jia et al., 2024) |
| Dataset Splits | Yes | 21 trajectories are used for training, while 9 trajectories are used for testing. |
| Hardware Specification | Yes | It takes around 30 mins to run 50 epochs on a laptop with 13th Gen Intel(R) Core(TM) i9-13900H. ... As for the neural feedback form, due to the optimization problem being non-convex, a satisfactory result usually takes 10 mins to 1 hour of training time on a laptop with Intel(R) Core(TM) Ultra 9 185H 2.30 GHz. |
| Software Dependencies | No | The paper mentions optimizers like 'RMSprop optimizer' and 'Adam optimizer' but does not specify their version numbers or any other software dependencies with version information. |
| Experiment Setup | Yes | In training, we use RMSprop optimizer with the default learning rate of 0.001. The network is trained with a batch size of 20 for 400 iterations. ... In training, we use RMSprop optimizer with the learning rate of 0.01. The network is trained with a batch size of 100 for 2000 iterations. ... In training, we use Adam optimizer with the default learning rate of 0.001. The network is trained with a batch size of 20 for 1000 iterations. |
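The first training configuration quoted above (RMSprop, default learning rate 0.001, batch size 20, 400 iterations) can be sketched as a minimal loop. This is an illustrative assumption, not the paper's code: the toy least-squares model, random data, and the RMSprop smoothing constants `alpha` and `eps` below are placeholders; only the learning rate, batch size, and iteration count come from the quoted setup.

```python
import numpy as np

# Hedged sketch of the reported setup: RMSprop with lr 1e-3,
# batch size 20, 400 iterations, on a placeholder regression task.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])       # placeholder ground-truth parameters
W = np.zeros(2)                      # parameters being learned
v = np.zeros(2)                      # RMSprop running mean of squared grads
lr, alpha, eps = 1e-3, 0.99, 1e-8    # lr from the paper; alpha/eps assumed defaults

for step in range(400):              # 400 iterations, as reported
    X = rng.normal(size=(20, 2))     # placeholder batch, batch size 20
    y = X @ true_w
    grad = 2 * X.T @ (X @ W - y) / 20
    v = alpha * v + (1 - alpha) * grad**2
    W -= lr * grad / (np.sqrt(v) + eps)   # RMSprop parameter update
```

The paper's other two configurations differ only in the quoted hyperparameters (lr 0.01 / batch 100 / 2000 iterations with RMSprop, and lr 0.001 / batch 20 / 1000 iterations with Adam).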