DynaMind: Reasoning over Abstract Video Dynamics for Embodied Decision-Making
Authors: Ziru Wang, Mengmeng Wang, Jade Dai, Teli Ma, Guo-Jun Qi, Yong Liu, Guang Dai, Jingdong Wang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive results demonstrate that DynaMind significantly outperforms the baselines across several simulation benchmarks and real-world scenarios. |
| Researcher Affiliation | Collaboration | (1) SGIT AI Lab, State Grid Corporation of China; (2) Zhejiang University of Technology; (3) The Hong Kong University of Science and Technology, Guangzhou; (4) Westlake University; (5) Zhejiang University; (6) Baidu. |
| Pseudocode | No | The paper describes the methodology in detail across sections 3.1, 3.2, and 3.3, but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code, a specific repository link, or an explicit code release statement for the methodology described. |
| Open Datasets | Yes | Environments. We validate our method on simulation benchmarks and real-world scenarios, with simulation benchmarks including robotic manipulation tasks: LOReL Sawyer (Nair et al., 2022) and Franka Kitchen (Gupta et al., 2020), and a navigation task, BabyAI (Chevalier-Boisvert et al., 2018). |
| Dataset Splits | Yes | The BabyAI (Chevalier-Boisvert et al., 2018) dataset includes various environment configurations... The dataset contains one million expert trajectories for each level, but only 0.1% are used for training, allowing evaluation under limited data conditions. |
| Hardware Specification | Yes | All models are trained on the LOReL Sawyer task suite (batch size 64) using identical hardware and settings (NVIDIA A800 GPU). |
| Software Dependencies | No | The paper mentions using a pre-trained DistilBERT model but does not provide specific version numbers for any software dependencies, programming languages, or libraries used in the implementation. |
| Experiment Setup | Yes | In our experiments, λ is set to 1 for simplicity. All models are trained on the LOReL Sawyer task suite (batch size 64) using identical hardware and settings (NVIDIA A800 GPU). In the BabyAI experiments, the default interval hyperparameter C is set to 30; to evaluate its impact, values of 5, 10, and 100 are also tested. It is implemented as a 1-layer, 4-head Transformer. Both loss terms are assigned equal weights (1.0) and jointly optimized during training. |
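To make the reported architecture concrete, the following is a minimal sketch of the shape described in the Experiment Setup row: a single Transformer layer with 4 attention heads, and two loss terms combined at equal weight (λ = 1.0). The model width, weight initialization, and the placeholder loss terms are illustrative assumptions; the paper does not specify them here.

```python
import numpy as np

# Assumed dimensions: only the head count (4) and layer count (1) come
# from the paper; D_MODEL is an illustrative choice.
D_MODEL, N_HEADS = 64, 4
HEAD_DIM = D_MODEL // N_HEADS
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o):
    """One Transformer attention layer. x: (seq, D_MODEL)."""
    seq = x.shape[0]
    q = (x @ w_q).reshape(seq, N_HEADS, HEAD_DIM)
    k = (x @ w_k).reshape(seq, N_HEADS, HEAD_DIM)
    v = (x @ w_v).reshape(seq, N_HEADS, HEAD_DIM)
    # scores[h, q, k]: scaled dot-product attention per head
    scores = np.einsum("qhd,khd->hqk", q, k) / np.sqrt(HEAD_DIM)
    attn = softmax(scores, axis=-1)
    out = np.einsum("hqk,khd->qhd", attn, v).reshape(seq, D_MODEL)
    return out @ w_o

# Random projection weights (hypothetical initialization scale).
w = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(4)]
x = rng.standard_normal((8, D_MODEL))   # dummy 8-token sequence
y = multi_head_self_attention(x, *w)

# Two loss terms jointly optimized at equal weight (lambda = 1), as
# reported; the terms themselves are placeholders, not the paper's
# actual dynamics/action losses.
LAMBDA = 1.0
total_loss = float((y ** 2).mean() + LAMBDA * np.abs(y).mean())
print(y.shape)  # (8, 64)
```

The sketch only fixes the hyperparameters the report actually states (1 layer, 4 heads, equal loss weights); everything else would need the authors' code, which the report notes is not released.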