MushroomRL: Simplifying Reinforcement Learning Research

Authors: Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. MushroomRL is accompanied by a benchmarking suite collecting experimental results of state-of-the-art deep RL algorithms and allowing users to benchmark new ones. The result is a library from which RL researchers can significantly benefit in the critical phase of the empirical analysis of their work. [...] We developed the MushroomRL Benchmarking Suite, a framework based on MushroomRL for running large-scale benchmarking experiments on the already provided algorithms, or on new ones implemented by users. Figure 2 shows several empirical results obtained using the MushroomRL Benchmarking Suite. Our results are comparable with the ones in the literature and assert the quality of the implementation of the algorithms in MushroomRL.
Researcher Affiliation: Academia. (1) TU Darmstadt, IAS, Hochschulstraße 10, 64289 Darmstadt, Germany; (2) Politecnico di Milano, DEIB, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
Pseudocode: No. The paper describes a software library and its features and discusses benchmarking. It does not contain any structured pseudocode or algorithm blocks for a presented methodology.
Open Source Code: Yes. MushroomRL stable code, tutorials, and documentation can be found at https://github.com/MushroomRL/mushroom-rl. [...] while the code repository is available at https://github.com/MushroomRL/mushroom-rl-benchmark.
Open Datasets: Yes. MushroomRL provides an interface to these libraries in order to integrate their functionalities in the framework, e.g. an interface for Gym environments, support for regression with scikit-learn models and PyTorch neural networks. [...] The MushroomRL Benchmarking Suite is a framework based on MushroomRL for running large-scale benchmarking experiments on the already provided algorithms, or on new ones implemented by users. Figure 2 shows several empirical results obtained using the MushroomRL Benchmarking Suite. Our results are comparable with the ones in the literature and assert the quality of the implementation of the algorithms in MushroomRL. Further results and details on the MushroomRL Benchmarking Suite, e.g. the hyper-parameters used in the experiments, can be found at https://mushroom-rl-benchmark.readthedocs.io/en/latest/index.html, while the code repository is available at https://github.com/MushroomRL/mushroom-rl-benchmark.
Dataset Splits: No. The paper discusses reinforcement learning environments and benchmarks, but it does not specify any training/validation/test dataset splits. This is typical for reinforcement learning papers, where agents interact directly with environments rather than learning from a fixed dataset.
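To illustrate why fixed dataset splits do not arise here: in RL the training data is generated on the fly by the agent acting in the environment. The following is a minimal, self-contained sketch of that interaction loop in plain Python; it is a generic illustration under assumed toy definitions (ToyEnv, run_episode), not MushroomRL's actual API.

```python
import random

class ToyEnv:
    """A 1-D corridor: start at position 0, terminate on reaching position 3."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action is -1 (step left) or +1 (step right); position is floored at 0
        self.pos = max(0, self.pos + action)
        done = self.pos >= 3
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def run_episode(env, policy, max_steps=50):
    """Roll out one episode: the 'dataset' is this stream of transitions."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        state, reward, done = env.step(policy(state))
        total_reward += reward
        if done:
            break
    return total_reward

random.seed(0)
env = ToyEnv()
random_policy = lambda s: random.choice([-1, 1])
returns = [run_episode(env, random_policy) for _ in range(100)]
print(sum(returns) / len(returns))  # average return of a random policy
```

Because each episode produces fresh transitions, evaluation is done by running further episodes (possibly with different seeds or environment variants) rather than by holding out a test split.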
Hardware Specification: No. The paper mentions support for GPU computation through PyTorch and TensorFlow, but it does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments.
Software Dependencies: No. Compatible standard Python libraries useful for RL tasks have been adopted: scientific computing: NumPy, SciPy; basic ML: numpy-ml, scikit-learn; RL benchmarks: OpenAI Gym, DeepMind Control Suite (Tassa et al., 2018), PyBullet, MuJoCo (Todorov et al., 2012), ROS; neural networks and GPU computation: PyTorch, TensorFlow. The paper lists several software dependencies but does not provide specific version numbers for these components.
Experiment Setup: No. The paper states that 'Further results and details on the MushroomRL Benchmarking Suite, e.g. the hyper-parameters used in the experiments, can be found at https://mushroom-rl-benchmark.readthedocs.io/en/latest/index.html'. While the hyper-parameters are available online, they are not provided in the main text of the paper.