Low-Rank Representation of Reinforcement Learning Policies

Authors: Bogdan Mazoure, Thang Doan, Tianyu Li, Vladimir Makarenkov, Joelle Pineau, Doina Precup, Guillaume Rabusseau

JAIR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct several experiments on classic RL domains. The results confirm that the policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns." (Abstract) and "5. Experimental Results" (section title).
Researcher Affiliation | Academia | Bogdan Mazoure, Thang Doan, Tianyu Li, Joelle Pineau, Doina Precup (School of Computer Science, McGill University, Montreal, QC, Canada); Vladimir Makarenkov (Département d'Informatique, Université du Québec à Montréal, Montreal, QC, Canada); Guillaume Rabusseau (Department of Computer Science and Operations Research, Mila, CIFAR AI Chair, Université de Montréal, Montreal, QC, Canada).
Pseudocode | Yes | "Algorithm 1: Quantile discretization" and "Algorithm 2: RKHS policy embedding" (Section 4), and "A.9.1 Python Pseudocode" (Appendix).
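For readers without access to the appendix, the core idea of Algorithm 1 (quantile discretization) can be sketched as follows. This is a minimal sketch, not the authors' pseudocode: the function name `quantile_discretize` and the choice of bin midpoints as representatives are our assumptions.

```python
import numpy as np

def quantile_discretize(actions, n_bins):
    """Sketch of quantile-based discretization of continuous actions.

    actions: 1-D array of sampled continuous actions.
    n_bins:  number of discrete bins (e.g. the paper's b_A).
    """
    # Bin edges at evenly spaced quantiles of the empirical action distribution,
    # so each bin receives roughly the same number of samples.
    edges = np.quantile(actions, np.linspace(0.0, 1.0, n_bins + 1))
    # Map each continuous action to the index of its quantile bin.
    idx = np.clip(np.searchsorted(edges, actions, side="right") - 1, 0, n_bins - 1)
    # Represent each bin by the midpoint of its edges (an assumed convention).
    centers = (edges[:-1] + edges[1:]) / 2.0
    return idx, centers
```

For example, discretizing 101 actions uniformly spread over [0, 1] into 4 bins assigns each action an index in {0, 1, 2, 3} and yields 4 representative bin centers.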
Open Source Code | Yes | "The code is included with Supplemental Files as a zip file; all dependencies can be installed using Python's package manager. Upon publication, the code would be available on Github. Additionally, we include the model's weights as well as the discretized policy for the Pendulum-v0 environment."
Open Datasets | Yes | A complete description of the data collection process, including sample size: "We use standard benchmarks provided in OpenAI Gym (Brockman et al., 2016)."
Dataset Splits | Yes | An explanation of how samples were allocated for training / validation / testing: "We do not use a training-validation-test split, but instead report the mean performance (and one standard deviation) of the policy at evaluation time across 10 trials."
Hardware Specification | No | A description of the computing infrastructure used: "All runs used 1 CPU for all experiments with 8Gb of memory."
Software Dependencies | No | "The code is included with Supplemental Files as a zip file; all dependencies can be installed using Python's package manager."
Experiment Setup | Yes | "A.9.3 Experimental parameters" table: b_A = 100 (Turntable), 15 (Pendulum), 10 (Continuous Mountain Car); b_S = N/A, 35, 35 respectively; shared across environments: Optimizer Adam, Architecture 256, Learning rate 1e-03, Hidden dimension 256, # rollouts 100, Torch seeds 0 to 9.
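The reported settings can be collected into a single configuration fragment, which makes the per-environment versus shared hyperparameters explicit. The dictionary keys and layout below are our own illustration, not the authors' code.

```python
# Hypothetical configuration mirroring the paper's Appendix A.9.3 table;
# key names and structure are assumptions made for illustration.
EXPERIMENT_CONFIG = {
    "shared": {
        "optimizer": "Adam",
        "architecture": 256,
        "learning_rate": 1e-3,
        "hidden_dimension": 256,
        "n_rollouts": 100,
        "torch_seeds": list(range(10)),  # seeds 0 to 9
    },
    "per_environment": {
        # b_A: number of action bins; b_S: number of state bins (N/A for Turntable)
        "Turntable": {"b_A": 100, "b_S": None},
        "Pendulum": {"b_A": 15, "b_S": 35},
        "ContinuousMountainCar": {"b_A": 10, "b_S": 35},
    },
}
```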