Cognitive Architectures for Language Agents

Authors: Theodore R. Sumers, Shunyu Yao, Karthik R. Narasimhan, Thomas L. Griffiths

TMLR 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence." |
| Researcher Affiliation | Academia | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths (Princeton University). |
| Pseudocode | No | The paper contains no explicitly labeled pseudocode or algorithm blocks. It uses diagrams (Figure 2, Figure 4) to illustrate conceptual architectures and processes, but these are not formatted as pseudocode. |
| Open Source Code | No | The paper links to a repository: "A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents." This is a curated collection of other works on language agents, not source code for the framework proposed in this paper. |
| Open Datasets | No | The paper proposes a theoretical framework and surveys existing work. It conducts no experiments of its own, so no dataset access information is provided. |
| Dataset Splits | No | No empirical evaluation is performed, so no dataset splits are defined. |
| Hardware Specification | No | No experiments are run, so no hardware specifications are given. |
| Software Dependencies | No | No experiments are run, so no software dependencies with version numbers are listed. |
| Experiment Setup | No | No experiments are run, so no setup details, hyperparameters, or training configurations are described. |
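Although the paper itself provides no pseudocode, the architecture summarized in the quoted abstract (modular memory components, a structured action space, and a generalized decision-making procedure) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: all class and method names are hypothetical, and the propose/evaluate/select cycle and memory modules are simplified readings of CoALA's framework.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a CoALA-style agent. The memory modules and the
# propose -> evaluate -> select decision cycle follow the framework's
# high-level description; nothing here comes from the paper's code.

@dataclass
class Memory:
    working: dict = field(default_factory=dict)     # current decision context
    episodic: list = field(default_factory=list)    # past experiences
    semantic: list = field(default_factory=list)    # world knowledge
    procedural: list = field(default_factory=list)  # skills / procedures

class CoALAAgent:
    def __init__(self):
        self.memory = Memory()

    def propose(self):
        # Candidate actions: internal (reasoning, retrieval) or external (grounding).
        return ["reason", "retrieve", "act_externally"]

    def evaluate(self, actions):
        # Score each candidate; a trivial stand-in heuristic for illustration.
        return {a: len(a) for a in actions}

    def select(self, scores):
        # Pick the highest-scoring action.
        return max(scores, key=scores.get)

    def step(self, observation):
        # One decision cycle: update working memory, plan, record, act.
        self.memory.working["obs"] = observation
        chosen = self.select(self.evaluate(self.propose()))
        self.memory.episodic.append((observation, chosen))
        return chosen

agent = CoALAAgent()
print(agent.step("door is locked"))
```

The point of the sketch is the separation of concerns: memory modules are plain data, while the decision cycle is the only control flow, which is what lets the framework classify existing agents by which modules and actions they implement.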