Robust Multi-Agent Reinforcement Learning with Stochastic Adversary

Authors: Ziyuan Zhou, Guanjun Liu, Mengchu Zhou, Weiran Guo

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the robustness of models trained by ATSA, this study conducts extensive experiments on StarCraft II tasks and autonomous driving scenarios. The results show that a) ATSA is robust against diverse perturbations of observations while maintaining outstanding performance in perturbation-free environments, and b) it outperforms the state-of-the-art methods.
Researcher Affiliation | Academia | (1) The School of Computer Science and Technology, Tongji University; (2) The School of Information and Electronic Engineering, Zhejiang Gongshang University; (3) The Helen and John C. Hartmann Department of Electrical and Computer Engineering, New Jersey Institute of Technology. Correspondence to: Guanjun Liu <EMAIL>.
Pseudocode | Yes | Algorithm 1: ATSA
Open Source Code | No | The paper does not explicitly provide a link to source code, nor does it contain an unambiguous statement that the code for the described methodology is being released or made publicly available.
Open Datasets | Yes | We evaluate our adversarial training framework on two challenging benchmarks: the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) and a Connected and Autonomous Vehicles (CAV) environment (Chen et al., 2023).
Dataset Splits | No | The paper specifies environment settings (e.g., episode horizon) for SMAC and CAV. However, it does not provide specific train/validation/test splits for pre-collected data, which is common in reinforcement learning, where data is generated through interaction with the environment.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions optimizers like RMSprop and classical MARL methods (VDN, QMIX) but does not provide specific version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | For SDor, the actor and critic networks consist of two MLP layers with a GRU (hidden size 64) inserted between them. RMSprop (Hinton, 2012; Wen & Zhou, 2024) is used to optimize all parameters, with both actor and critic employing a learning rate of 0.0005. The target networks are updated every 200 episodes. The temperature parameter α and the set {α_i}, i = 1, …, M, follow the same configuration as in (Zhang et al., 2021b). Additionally, our framework introduces an extra hyperparameter, κ, which regulates the influence of the SDor-STor loss function on SDor's policy. In our experiments, κ for the VDN-based protagonist agent is selected from the set {0.001, 0.005, 0.01}, while for the QMIX-based protagonist agent it is chosen from the set {0.001, 0.005, 0.01, 0.025}, depending on the environment, to optimize performance.
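The actor architecture quoted above (an MLP layer, a GRU with hidden size 64, then a second MLP layer producing the policy) can be sketched as follows. This is a minimal NumPy illustration only: the weight initialization, the ReLU and softmax choices, and all class and variable names are assumptions for the sketch, not the authors' released implementation.

```python
import numpy as np

HIDDEN = 64  # GRU hidden size reported in the experiment setup
rng = np.random.default_rng(0)

def glorot(shape):
    # Illustrative Glorot-uniform initialization (an assumption).
    limit = np.sqrt(6.0 / sum(shape))
    return rng.uniform(-limit, limit, shape)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Standard GRU cell: h' = (1 - z) * h + z * h_tilde."""
    def __init__(self, in_dim, hid):
        self.Wz, self.Uz, self.bz = glorot((in_dim, hid)), glorot((hid, hid)), np.zeros(hid)
        self.Wr, self.Ur, self.br = glorot((in_dim, hid)), glorot((hid, hid)), np.zeros(hid)
        self.Wh, self.Uh, self.bh = glorot((in_dim, hid)), glorot((hid, hid)), np.zeros(hid)

    def __call__(self, x, h):
        z = sigmoid(x @ self.Wz + h @ self.Uz + self.bz)          # update gate
        r = sigmoid(x @ self.Wr + h @ self.Ur + self.br)          # reset gate
        h_tilde = np.tanh(x @ self.Wh + (r * h) @ self.Uh + self.bh)
        return (1.0 - z) * h + z * h_tilde

class Actor:
    """MLP -> GRU -> MLP, emitting a softmax policy over discrete actions."""
    def __init__(self, obs_dim, n_actions, hid=HIDDEN):
        self.W1, self.b1 = glorot((obs_dim, hid)), np.zeros(hid)
        self.gru = GRUCell(hid, hid)
        self.W2, self.b2 = glorot((hid, n_actions)), np.zeros(n_actions)

    def __call__(self, obs, h):
        x = np.maximum(obs @ self.W1 + self.b1, 0.0)              # first MLP layer
        h_next = self.gru(x, h)                                   # recurrent core
        logits = h_next @ self.W2 + self.b2                       # second MLP layer
        e = np.exp(logits - logits.max())                         # stable softmax
        return e / e.sum(), h_next

# One forward step with hypothetical dimensions (obs_dim=30, 9 actions).
actor = Actor(obs_dim=30, n_actions=9)
h = np.zeros(HIDDEN)
probs, h = actor(rng.standard_normal(30), h)
```

In the paper's setup, all of these parameters would be trained with RMSprop at a learning rate of 0.0005, with target networks refreshed every 200 episodes; the optimizer loop is omitted here.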