Offline Reinforcement Learning with Additional Covering Distributions

Authors: Chenjie Mao

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study learning optimal policies from a logged dataset, i.e., offline RL, with general function approximation. Despite the efforts devoted, existing algorithms with theoretical finite-sample guarantees typically assume exploratory data coverage or strong realizable function classes (e.g., Bellman-completeness), which are hard to satisfy in reality. While there are recent works that successfully tackle these strong assumptions, they either require gap assumptions that can only be satisfied by some MDPs or use behavior regularization that needs to be carefully controlled. To solve this challenge, we provide finite-sample guarantees for a simple algorithm based on marginalized importance sampling (MIS), showing that sample-efficient offline RL for general MDPs is possible with only a partial-coverage dataset (instead of assuming a dataset covering all possible policies) and weakly realizable function classes (assuming function classes containing simply one function), given additional side information of a covering distribution. We demonstrate that the covering distribution trades off prior knowledge of the optimal trajectories against the coverage requirement of the dataset, revealing the effect of this inductive bias in the learning process.
Researcher Affiliation | Academia | Chenjie Mao, School of Computer Science and Technology, Huazhong University of Science and Technology
Pseudocode | Yes | Algorithm 1: VOPR (Value-Based Offline RL with Policy Ratio). Input: dataset D, value function class Q, distribution density ratio class W, policy ratio function class B, and covering distribution dc.
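VOPR itself is specified in the paper's Algorithm 1; the marginalized importance sampling (MIS) idea it builds on can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the function name, the candidate ratio function, and the dataset are all hypothetical, and a real MIS method would learn the density ratio w ≈ d^π/d^D from the classes W and B rather than take it as given.

```python
import numpy as np

def mis_value_estimate(states, actions, rewards, w):
    """Toy MIS estimate of a target policy's value from logged data.

    Each logged reward r(s, a) is reweighted by the state-action density
    ratio w(s, a) ~= d^pi(s, a) / d^D(s, a), so that an average over the
    behavior distribution d^D approximates an average over d^pi.
    """
    ratios = np.array([w(s, a) for s, a in zip(states, actions)])
    return float(np.mean(ratios * rewards))

# Illustrative usage: with a constant ratio of 1 (target policy identical
# to the behavior policy), the estimate reduces to the mean logged reward.
states = np.zeros(4)
actions = np.zeros(4)
rewards = np.array([1.0, 2.0, 3.0, 2.0])
estimate = mis_value_estimate(states, actions, rewards, lambda s, a: 1.0)
```

The sketch also shows where the paper's partial-coverage question enters: the estimate is only reliable where w is well-defined, i.e., where the dataset (or the additional covering distribution) actually covers the target policy's state-action distribution.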
Open Source Code | No | The paper does not contain any explicit statements about open-sourcing the code, nor does it provide links to a code repository or mention code in supplementary materials.
Open Datasets | No | The paper mentions learning from "a logged dataset, i.e., offline RL" and "a pre-collected dataset" but does not specify any particular dataset, provide links, or reference known public datasets used for experiments. The work is theoretical in nature.
Dataset Splits | No | As the paper is theoretical and does not describe experiments performed on specific datasets, there is no mention of training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and focuses on algorithm design and analysis; it therefore does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not detail any experimental implementation. Consequently, it does not list specific software dependencies with version numbers.
Experiment Setup | No | The paper presents an algorithm and its theoretical analysis but does not describe any practical experiments or their setup, including hyperparameters or system-level training settings.