Graph Diffusion for Robust Multi-Agent Coordination
Authors: Xianghua Zeng, Hang Su, Zhengyi Wang, Zhiyuan Lin
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across various multi-agent environments demonstrate that MCGD significantly outperforms existing state-of-the-art baselines in coordination performance and exhibits superior robustness to dynamic environmental changes. |
| Researcher Affiliation | Academia | 1State Key Laboratory of Software Development Environment, Beihang University, Beijing, China 2Department of Computer Science and Technology, Institute for AI, BNRist Center, Tsinghua Bosch Joint ML Center, Tsinghua University, Beijing, China. Correspondence to: Hang Su <EMAIL>. |
| Pseudocode | Yes | The whole sampling process is summarized in Algorithm 1. ... The training details of our MCGD framework are provided in Appendix 7.1.2. ... We summarize the training process of MCGD framework in Algorithm 2. |
| Open Source Code | No | Our team is currently working on deploying the proposed method in real-world multi-robot hunting scenarios. Although quantitative results are not yet available for inclusion in this version, we are actively collecting data and refining the deployment process. We plan to report these findings as part of a more extensive evaluation in a future extension of this work. |
| Open Datasets | Yes | We conduct extensive evaluations on the three following well-established multi-agent benchmarks: Multi-Agent Particle Environments (MPE) (Lowe et al., 2017). ... Multi-Agent MuJoCo (MAMuJoCo) (Peng et al., 2021). ... StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019). The offline datasets are sourced from (Formanek et al., 2023)... |
| Dataset Splits | No | The paper mentions using |
| Hardware Specification | Yes | All experiments are conducted on Linux servers with a 64-core Intel Xeon Platinum 8336C CPU (2.30 GHz) and an NVIDIA A800-SXM4-80GB GPU. |
| Software Dependencies | No | The paper mentions 'Linux servers' as the operating system and 'Adam optimizer' as the optimization algorithm but does not specify their versions or the versions of any other key software libraries or frameworks used in the implementation. |
| Experiment Setup | Yes | Across all experiments, the diffusion parameter K is varied within the range [50, 200], with a learning rate of 2e-4, a batch size of 32, a reward discount γ of 0.99, and the Adam optimizer utilized. |
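The hyperparameters reported in the Experiment Setup row can be collected into a single config object. This is a minimal sketch only: the paper releases no code, so the class name `MCGDConfig` and all field names are illustrative, not the authors' implementation.

```python
# Hedged sketch of the reported MCGD hyperparameters as a config object.
# All names here are hypothetical; only the values come from the paper.
from dataclasses import dataclass


@dataclass(frozen=True)
class MCGDConfig:
    diffusion_steps: int = 100    # K, varied within [50, 200] across experiments
    learning_rate: float = 2e-4   # used with the Adam optimizer
    batch_size: int = 32
    gamma: float = 0.99           # reward discount


cfg = MCGDConfig()
# Sanity check: K must stay inside the range the paper reports.
assert 50 <= cfg.diffusion_steps <= 200
```

A frozen dataclass keeps the reported settings immutable and easy to log alongside results, which matters for a paper whose software dependencies and versions are otherwise unspecified.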