Falcon: Fast Visuomotor Policies via Partial Denoising
Authors: Haojun Chen, Minghao Liu, Chengdong Ma, Xiaojian Ma, Zailin Ma, Huimin Wu, Yuanpei Chen, Yifan Zhong, Mingzhi Wang, Qing Li, Yaodong Yang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validated Falcon in 48 simulated environments and 2 real-world robot experiments, demonstrating a 2-7x speedup with negligible performance degradation, offering a promising direction for efficient visuomotor policy design. (Section 4, Experiments) Our experiments aim to address four key questions: (1) Can Falcon accelerate diffusion policy, and does it further enhance speed when integrated with other acceleration algorithms (Section 4.3)? (2) Can Falcon effectively accelerate diffusion policies in real-world robotic settings while preserving performance (Section 4.5)? (3) Does Falcon maintain its acceleration advantage in long-sequence tasks (Section E.3)? (4) Can Falcon retain the ability to express multimodality while achieving speed improvements (Section 4.6)? |
| Researcher Affiliation | Academia | 1Institute for Artificial Intelligence, Peking University 2National Key Laboratory of General Artificial Intelligence, BIGAI 3School of Electronic Engineering and Computer Science, Peking University 4School of Mathematical Sciences, Peking University 5PKU-Psi Bot Joint Lab. Correspondence to: Qing Li <EMAIL>, Yaodong Yang <EMAIL>. |
| Pseudocode | Yes | In Algorithm 1, we present the pseudocode of Falcon, including the threshold ϵ, exploration rate δ, and latent buffer B. |
| Open Source Code | Yes | The code is available at https://github.com/chjchjchjchjchj/Falcon. |
| Open Datasets | Yes | In simulation, we test Falcon across 48 tasks spanning five widely used benchmarks, including RoboMimic (Mandlekar et al., 2022), Robosuite Kitchen (Gupta et al., 2020), Block Push (Shafiullah et al., 2022), Meta-World (Yu et al., 2020), and ManiSkill2 (Gu et al., 2023). |
| Dataset Splits | Yes | We evaluate Falcon across 48 tasks spanning five widely used benchmarks, including RoboMimic (Mandlekar et al., 2022), Robosuite Kitchen (Gupta et al., 2020), Block Push (Shafiullah et al., 2022), Meta-World (Yu et al., 2020), and ManiSkill2 (Gu et al., 2023). Models are taken from the Diffusion Policy repository: https://diffusion-policy.cs.columbia.edu/data/experiments/low_dim/. We follow the setup and model training in the official 3D Diffusion Policy codebase (Ze et al., 2024b): https://github.com/YanjieZe/3D-Diffusion-Policy. |
| Hardware Specification | No | The paper describes the robot hardware (e.g., "7-DoF RealMan RM75-6F arm", "RealSense D405C camera"), but does not specify the computing hardware (CPU/GPU models, memory) used for training or inference. |
| Software Dependencies | No | The paper mentions using open-source implementations and repositories like Diffusion Policy and SDP, but it does not specify concrete version numbers for software dependencies such as Python, PyTorch, or specific libraries. |
| Experiment Setup | Yes | Each environment uses an action prediction horizon Tp = 16 and an action execution horizon Ta = 8, and all tasks use state-based observations. We construct a CNN-based Diffusion Policy (Chi et al., 2023) with DDPM scheduler using 100 denoising steps and the DDIM/DPMSolver scheduler using 16 denoising steps. Table 10. Hyperparameters and Memory Cost in Robomimic. (This table lists specific values for ϵ, δ, kmin, |B| for different tasks and models). |
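The experiment-setup and pseudocode rows above describe partial denoising: rather than denoising an action chunk from pure noise over all K = 100 DDPM steps, Falcon warm-starts from a previous prediction and runs only the final few steps. Below is a minimal toy sketch of that idea under loud assumptions: the "denoiser" is a made-up linear pull toward a fixed target (not the paper's learned CNN policy), the step schedule is invented, and the threshold ϵ / exploration rate δ / buffer B logic is omitted. Only the shapes (Tp = 16, action dim 7) and step counts echo the paper.

```python
import numpy as np

K = 100                      # full DDPM denoising steps, as in the paper's setup
TP, ACTION_DIM = 16, 7       # prediction horizon Tp and a 7-DoF action (illustrative)
TARGET = np.ones((TP, ACTION_DIM))  # stand-in for the "clean" action chunk

def denoise_step(x, t):
    """Dummy denoiser: shrink the deviation from TARGET by a factor t/(t+1)."""
    return x + (TARGET - x) / (t + 1)

def sample(x_init, start_step):
    """Run the reverse process from `start_step` down to 1."""
    x = x_init
    for t in range(start_step, 0, -1):
        x = denoise_step(x, t)
    return x

rng = np.random.default_rng(0)
# Full denoising: start from pure noise and run all K steps.
full = sample(rng.standard_normal((TP, ACTION_DIM)), K)
# Partial denoising: warm-start near the previous chunk and run only 10 steps.
partial = sample(full + 0.05 * rng.standard_normal((TP, ACTION_DIM)), 10)
# Both land near TARGET, but the partial run used 10x fewer denoiser calls.
```

Because consecutive action chunks overlap (Ta = 8 of the Tp = 16 predicted actions are executed), the previous prediction is already close to the next one, which is what makes a short partial run sufficient in this toy picture.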