Sample Complexity Reduction via Policy Difference Estimation in Tabular Reinforcement Learning

Authors: Adhyyan Narang, Andrew Wagenmaker, Lillian Ratliff, Kevin G. Jamieson

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The contributions of this paper are entirely theoretical.
Researcher Affiliation | Academia | Adhyyan Narang (University of Washington), Andrew Wagenmaker (University of California, Berkeley), Lillian J. Ratliff (University of Washington), Kevin Jamieson (University of Washington).
Pseudocode | Yes | Algorithm 1 PERP: Policy Elimination with Reference Policy (informal) ... Algorithm 2 PERP: Policy Elimination with Reference Policy. (An illustrative sketch follows the table below.)
Open Source Code | No | The NeurIPS checklist answers 'NA' for open access to data and code, with the justification 'The contributions of this paper are entirely theoretical.' The paper provides no link to, or statement about, open-source code.
Open Datasets | No | The contributions are entirely theoretical; the paper does not train on any dataset or discuss dataset availability.
Dataset Splits | No | The contributions are entirely theoretical; the paper does not discuss training, validation, or test splits.
Hardware Specification | No | The contributions are entirely theoretical; the paper does not specify any hardware used for experiments.
Software Dependencies | No | The contributions are entirely theoretical; the paper does not list software dependencies or version numbers.
Experiment Setup | No | The contributions are entirely theoretical; the paper provides no experimental setup details.
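Since the paper releases no code, the sketch below is a minimal, hypothetical illustration of a generic policy-elimination loop that compares candidate policies through estimated value differences to a shared reference policy, in the spirit of the paper's title. It is not the authors' PERP algorithm: the toy MDP, the candidate policy set, the number of episodes per round, and the Hoeffding-style confidence width are all assumptions made purely for illustration.

```python
# Hypothetical illustration only -- not the authors' PERP algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state, 2-action MDP with horizon H (all quantities illustrative).
S, A, H = 3, 2, 5
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(S, A))       # mean reward for each (s, a)

def rollout(policy):
    """Total reward of one episode under a deterministic policy: state -> action."""
    s, total = 0, 0.0
    for _ in range(H):
        a = policy[s]
        total += R[s, a]
        s = rng.choice(S, p=P[s, a])
    return total

def mc_value(policy, n):
    """Monte Carlo estimate of the policy's value from n episodes."""
    return np.mean([rollout(policy) for _ in range(n)])

# All |A|^|S| deterministic policies (enumerable only for a toy problem).
candidates = [np.array(p) for p in np.ndindex(*([A] * S))]

n_per_round = 200
for _ in range(4):                            # a few elimination rounds
    reference = candidates[0]                 # arbitrary reference from the surviving set
    ref_value = mc_value(reference, n_per_round)
    # Compare candidates through estimated *differences* to the reference policy.
    diffs = np.array([mc_value(pi, n_per_round) - ref_value for pi in candidates])
    # Hoeffding-style width; the constant is illustrative, not the paper's bound.
    width = H * np.sqrt(2.0 * np.log(10 * len(candidates)) / n_per_round)
    keep = diffs >= diffs.max() - 2.0 * width
    candidates = [pi for pi, k in zip(candidates, keep) if k]
    n_per_round *= 2                          # sharpen estimates for the survivors
    if len(candidates) == 1:
        break

print(f"policies surviving elimination: {len(candidates)}")
```

The only design point borrowed from the paper's framing is that candidates are ranked by estimated differences to a common reference policy rather than by independent value estimates; every other detail is a placeholder.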