Simultaneous Multi-Robot Motion Planning with Projected Diffusion Models

Authors: Jinhao Liang, Jacob K Christopher, Sven Koenig, Ferdinando Fioretto

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show that SMD consistently outperforms classical and other learning-based motion planners, achieving higher success rates and efficiency in complex multi-robot environments. Among the paper's stated contributions, the fourth introduces the first benchmark for MRMP evaluation, featuring complex input maps and diverse scenarios.
Researcher Affiliation Academia (1) Department of Computer Science, University of Virginia, Charlottesville, VA 22903, USA; (2) Department of Computer Science, University of California, Irvine, CA 92697, USA. Correspondence to: Ferdinando Fioretto <EMAIL>.
Pseudocode Yes Algorithm 1 Diffusion Sampling Process in SMD
1: Input: Gaussian noise x^0_T
2: for t = T, ..., 1 do
3:   Initialize γ_t
4:   for i = 1, ..., M do
5:     Sample z ~ N(0, I)
6:     Compute g ← s_θ(x^{i−1}_t, t)
7:     Update x^i_t ← P_Ω(x^{i−1}_t + γ_t g + √(2γ_t) z)
8:   end for
9:   x^0_{t−1} ← x^M_t
10: end for
11: Output: x^0_0
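The sampling loop above can be sketched in plain Python. Here `score_fn` (standing in for the learned score network s_θ) and `project_fn` (standing in for the projection P_Ω onto the constraint set, e.g., collision-free trajectories) are hypothetical placeholders, and the constant step size γ_t is an assumption, not the paper's schedule:

```python
import numpy as np

def smd_sampling(score_fn, project_fn, dim, T=25, M=5, rng=None):
    """Sketch of Algorithm 1: projected diffusion sampling.

    score_fn(x, t): hypothetical stand-in for the score network s_theta.
    project_fn(x):  hypothetical stand-in for the projection P_Omega.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(dim)           # x^0_T: start from Gaussian noise
    for t in range(T, 0, -1):
        gamma_t = 1e-3                     # step size gamma_t (assumed constant)
        for _ in range(M):
            z = rng.standard_normal(dim)   # z ~ N(0, I)
            g = score_fn(x, t)             # g <- s_theta(x^{i-1}_t, t)
            # Langevin-style update, then project onto the feasible set Omega
            x = project_fn(x + gamma_t * g + np.sqrt(2.0 * gamma_t) * z)
        # x^0_{t-1} <- x^M_t: the projected sample seeds the next diffusion step
    return x                               # x^0_0
```

With a toy score (e.g., `lambda x, t: -x`) and a box projection (`lambda x: np.clip(x, -1, 1)`), the returned sample always lies inside the constraint set, which is the point of the projection step.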
Open Source Code Yes The code and implementation are available at https://github.com/RAISELab-atUVA/Diffusion-MRMP.
Open Datasets Yes To ensure a comprehensive evaluation, this paper also introduces a new set of benchmark instances that captures a variety of real-world MRMP challenges (released as supplemental material).
Dataset Splits No The paper describes how test cases are generated (e.g., "Each scenario includes 25 maps with different obstacle configurations.", "For each number of robots, we generate 10 test cases, resulting in 4,000 test instances for each method.") and used for evaluation. However, it does not explicitly provide details about training/validation/test dataset splits used for training the models. The training details section mentions generating training data by running MMD but doesn't specify splits.
Hardware Specification Yes Hardware: For each of our experiments, we used 1 AMD EPYC 7352 24-Core Processor and 1 NVIDIA RTX A6000 GPU.
Software Dependencies Yes Software: The software used for experiments is Rocky Linux release 8.9, Python 3.8, CUDA 11.8, and PyTorch 2.0.0.
Experiment Setup Yes Table 1. Hyperparameters for Training in Experiments: Diffusion Sampling Steps: 25; Learning Rate: 1e-4; Batch Size: 64; Optimizer: Adam.
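The hyperparameters from Table 1 can be collected into a small config; the key names below are illustrative, not taken from the released code:

```python
# Training hyperparameters reported in Table 1 of the paper.
# The dictionary keys are illustrative names, not the repo's actual config schema.
TRAIN_CONFIG = {
    "diffusion_sampling_steps": 25,   # number of diffusion sampling steps
    "learning_rate": 1e-4,            # Adam learning rate
    "batch_size": 64,
    "optimizer": "Adam",
}
```

Keeping these values in one place makes it easy to verify a reimplementation against the paper's reported setup.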