Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning

Authors: Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai

NeurIPS 2021 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | This paper initiates the theoretical study of policy finetuning... We study the policy finetuning problem theoretically in finite-horizon Markov Decision Processes (MDPs) with H time steps, S states, and A actions. |
| Researcher Affiliation | Collaboration | Tengyang Xie, UIUC, EMAIL; Nan Jiang, UIUC, EMAIL; Huan Wang, Salesforce Research, EMAIL; Caiming Xiong, Salesforce Research, EMAIL; Yu Bai, Salesforce Research, EMAIL |
| Pseudocode | Yes | Algorithm 1: Pessimistic Value Iteration with Reference-Advantage Decomposition (PEVI-ADV) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper studies theoretical aspects of reinforcement learning in episodic Markov Decision Processes (MDPs) and does not use or provide access information for any publicly available or open dataset. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments involving dataset splits (e.g., training, validation, or test splits). |
| Hardware Specification | No | The paper is theoretical and does not report on empirical experiments that would require or specify hardware details. |
| Software Dependencies | No | The paper is theoretical and focuses on algorithms and their sample complexity; it does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and analysis; it does not provide specific experimental setup details such as hyperparameters or training configurations. |