Grounding Video Models to Actions through Goal Conditioned Exploration
Authors: Yunhao Luo, Yilun Du
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the proposed approach on 8 tasks in Libero, 6 tasks in Meta-World, 4 tasks in Calvin, and 12 tasks in iTHOR Visual Navigation. We show how our approach is on par with or even surpasses multiple behavior cloning baselines trained on expert demonstrations, without requiring any action annotations. |
| Researcher Affiliation | Academia | Yunhao Luo, Yilun Du (Georgia Tech, Brown, Harvard) |
| Pseudocode | Yes | Algorithm 1 Grounding Video Model to Actions |
| Open Source Code | No | The paper provides a project website link (https://video-to-action.github.io/) but does not explicitly state that the source code for the described methodology is hosted there, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We validate the proposed approach on 8 tasks in Libero (Liu et al., 2024), 6 tasks in Meta-World (Yu et al., 2020), 4 tasks in Calvin (Mees et al., 2022), and 12 tasks in iTHOR (Kolve et al., 2017) Visual Navigation. |
| Dataset Splits | Yes | The video model is trained on the visual image sequences of the demonstrations provided in Libero, where we use 20 episodes per task, thereby 160 demonstrations in total. ... We save a model checkpoint after every 10 video exploration episodes and evaluate each checkpoint on 25 test-time problems. |
| Hardware Specification | Yes | For each of our experiments, we used 1 NVIDIA RTX 3090 GPU or a GPU of similar configuration. |
| Software Dependencies | Yes | Software: The computation platform is installed with Red Hat 7.9, Python 3.9, PyTorch 2.0, and CUDA 11.8 |
| Experiment Setup | Yes | We provide detailed hyperparameters for training our model in Tables 13 and 14. Table 13 (Hyperparameters of our Goal-Conditioned Policy in Libero, Meta-World, and Calvin): Action Prediction Horizon 16; Action Horizon 8; Diffusion Time Steps 100; Iterations 200K; Batch Size 64; Optimizer Adam; Learning Rate 1e-4; Input Image Resolution (128, 128). Table 15 (Hyperparameters for Random Action Bootstrapping in each Environment): Warm-Start Steps N_r 10k; Episode Length 120; Action Chunk Size l_c 24; # of Initial Episodes n_r 50; Periodic Frequency q_r 500; # of Additional Episodes 2; Replay Buffer Size 1200 |
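As a rough illustration of the experiment setup reported above, the hyperparameters quoted from the paper's Tables 13 and 15 could be collected into configuration dictionaries. This is a hypothetical sketch for readability; the key names are illustrative and are not taken from the authors' codebase.

```python
# Hypothetical configuration sketch assembling the hyperparameters quoted
# from Tables 13 and 15 of the paper. Key names are illustrative only.

# Table 13: goal-conditioned policy (Libero, Meta-World, Calvin)
policy_config = {
    "action_prediction_horizon": 16,
    "action_horizon": 8,
    "diffusion_time_steps": 100,
    "iterations": 200_000,
    "batch_size": 64,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "input_image_resolution": (128, 128),
}

# Table 15: random action bootstrapping (per environment)
bootstrap_config = {
    "warm_start_steps": 10_000,      # N_r
    "episode_length": 120,
    "action_chunk_size": 24,         # l_c
    "num_initial_episodes": 50,      # n_r
    "periodic_frequency": 500,       # q_r
    "num_additional_episodes": 2,
    "replay_buffer_size": 1200,
}
```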