A Survey on Transformers in Reinforcement Learning

Authors: Wenzhe Li, Hao Luo, Zichuan Lin, Chongjie Zhang, Zongqing Lu, Deheng Ye

Venue: TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we seek to systematically review motivations and progress on using Transformers in RL, provide a taxonomy on existing works, discuss each sub-field, and summarize future prospects.
Researcher Affiliation | Collaboration | (1) Tsinghua University, (2) Peking University, (3) BAAI, (4) Tencent Inc., (5) Washington University in St. Louis
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. It describes methodologies conceptually and provides a taxonomy of existing works.
Open Source Code | No | This is a survey paper and does not present original experimental work for which source code would typically be released. It provides no code-availability statement or repository link for its own methodology.
Open Datasets | No | This is a survey paper and does not present original experimental work that relies on a specific dataset. While it references many existing public datasets in its review of other works, it conducts no experiments of its own and therefore provides no dataset access information.
Dataset Splits | No | This is a survey paper that conducts no original experiments and therefore provides no information about dataset splits.
Hardware Specification | No | This is a survey paper that conducts no original experiments, so no hardware specifications are provided.
Software Dependencies | No | This is a survey paper that neither implements new software nor conducts experiments, so it lists no software dependencies with version numbers.
Experiment Setup | No | This is a survey paper with no original experimental work, so it describes no experimental setup details such as hyperparameters or training configurations.