MoRAgent: Parameter Efficient Agent Tuning with Mixture-of-Roles

Authors: Jing Han, Binwei Yan, Tianyu Guo, Zheyuan Bai, Mengyu Zheng, Hanting Chen, Ying Nie

ICML 2025

Reproducibility
Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments and thorough ablation studies on various LLMs and agent benchmarks, demonstrating the effectiveness of the proposed method.
Researcher Affiliation | Collaboration | 1 School of Artificial Intelligence, Beijing University of Posts and Telecommunications; 2 Huawei Noah's Ark Lab. Correspondence to: Ying Nie <EMAIL>.
Pseudocode | Yes | Algorithm 1: Fine-tuning LLM with MoR for agent tasks.
Open Source Code | Yes | This project is publicly available at https://mor-agent.github.io/.
Open Datasets | Yes | We adopt publicly available datasets, including ToolBench (Qin et al., 2023), the combination of APIGen (Liu et al., 2024d), ToolACE (Liu et al., 2024c), and glaive-function-calling-v2, and MathGenie (Lu et al., 2024), to fine-tune for the corresponding downstream agent tasks, respectively.
Dataset Splits | Yes | For each role, 80k samples are randomly selected as the training set, while 5k samples are sampled as the validation set.
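The per-role split described above (80k random training samples and 5k validation samples) can be sketched as follows; the function name and seeding are illustrative assumptions, not taken from the authors' code.

```python
import random

def split_role_samples(samples, n_train=80_000, n_val=5_000, seed=0):
    """Hypothetical sketch: shuffle a role's samples and carve out
    disjoint train/validation subsets, as described in the paper."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = samples[:]      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    return train, val
```

Because the two slices come from one shuffled copy, the training and validation sets are guaranteed to be disjoint.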
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper.
Experiment Setup | Yes | Also, we set the learning rate to 5e-5, with 4 epochs of fine-tuning by MoR, and α1 and α2 in Equation 8 set to 1e-3 and 1e-4, respectively.
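The reported hyperparameters can be collected into a single configuration, a minimal sketch assuming a plain dictionary; the key names are hypothetical and only the numeric values come from the paper.

```python
# Hypothetical config mirroring the hyperparameters quoted above;
# key names are illustrative, not from the authors' code.
mor_config = {
    "learning_rate": 5e-5,  # fine-tuning learning rate
    "epochs": 4,            # epochs of MoR fine-tuning
    "alpha_1": 1e-3,        # α1 in Equation 8
    "alpha_2": 1e-4,        # α2 in Equation 8
}
```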