A Review of the Applications of Deep Learning-Based Emergent Communication

Authors: Brendon Boldt, David R. Mortensen

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper comprehensively reviews the applications of emergent communication research across machine learning, natural language processing, linguistics, and cognitive science. Each application is illustrated with a description of its scope, an explication of emergent communication's unique role in addressing it, a summary of the extant literature working towards the application, and brief recommendations for near-term research directions.
Researcher Affiliation | Academia | Brendon Boldt (EMAIL), Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, USA; David Mortensen (EMAIL), Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, USA
Pseudocode | No | No explicit pseudocode or algorithm blocks are provided. The paper discusses concepts and architectures; for example, Figure 2b shows a technical architecture diagram, but this is not pseudocode.
Open Source Code | No | The paper is a review and does not present new methodology that would require its own source code release. It mentions tools and frameworks such as EGG (Kharitonov et al., 2019) and TexRel (Perkins, 2021b) that are relevant to the field being reviewed, but these are not code releases by the authors of this paper.
Open Datasets | No | The paper is a review and does not conduct new experiments that would involve releasing a dataset. It references datasets used in the reviewed literature, such as TexRel (Perkins, 2021b), but this is a dataset from other authors discussed in the review, not one provided by the authors of this paper.
Dataset Splits | No | The paper is a review and does not conduct new experiments that would require specifying dataset splits. It discusses general concepts like "generalizing from training data to test data" in the context of metrics within emergent communication research, but these refer to the methods of other papers, not the authors' own data splits.
Hardware Specification | No | No specific hardware details (such as GPU models, CPU types, or memory specifications) used for running experiments related to this review paper are provided. The paper discusses computational costs and scaling in general terms, referring to other models such as GPT-4, but not for the work presented by its authors.
Software Dependencies | No | The paper is a review and does not describe computational experiments requiring specific software dependencies with version numbers. It mentions general-purpose tools and frameworks such as PyTorch (Paszke et al., 2019) and EGG (Emergence of lanGuage in Games) (Kharitonov et al., 2019) in the context of the literature it surveys, but these are not dependencies for the authors' own work presented in this review.
Experiment Setup | No | The paper is a literature review and does not describe any experimental setup, hyperparameters, or training configurations of its own. It discusses experimental settings and approaches found in the surveyed literature but does not present new experiments.