An Evaluation of Communication Protocol Languages for Engineering Multiagent Systems

Authors: Amit K. Chopra, Samuel H. Christie V, Munindar P. Singh

JAIR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We contribute a rich evaluation of diverse and modern protocol languages. Among the selected languages, Scribble is based on session types; Trace-C and Trace-F on trace expressions; HAPN on hierarchical state machines; and BSPL on information causality. Our contribution is four-fold. One, we contribute important criteria for evaluating protocol languages. Two, for each criterion, we compare the languages on the basis of whether they are able to specify elementary protocols that go to the heart of the criterion. Three, for each language, we map our findings to a canonical architecture style for multiagent systems, highlighting where the languages depart from the architecture. Four, we identify design principles for protocol languages as guidance for future research.
Researcher Affiliation | Academia | Amit K. Chopra (EMAIL), Lancaster University, Lancaster, LA1 4WA, UK; Samuel H. Christie V (EMAIL), North Carolina State University, Raleigh, NC 27695, USA, and Lancaster University, Lancaster, LA1 4WA, UK; Munindar P. Singh (EMAIL), North Carolina State University, Raleigh, NC 27695, USA
Pseudocode | No | The paper contains several "Listing" blocks (e.g., Listing 1: Purchase (Use Case 1) in Scribble; Listing 2: Scribble projections of Purchase (Listing 1) for buyer and seller), which present code examples in the various protocol languages to illustrate the use cases. However, these are not pseudocode or algorithm blocks in the sense of structured, step-by-step procedures for an algorithm developed by the authors.
Open Source Code | No | The paper mentions existing tools for the evaluated languages, such as "Scribble's tools (Scribble, 2018)", and states that "The Scribble, Trace-F, and BSPL protocols have been verified in their respective tooling." However, the authors make no explicit statement that they provide or release source code for their own methodology or analysis.
Open Datasets | No | The paper evaluates communication protocol languages against "minimal use cases for protocols" (e.g., Use Case 1 (Purchase) and Use Case 3 (Flexible purchase)) and "elementary protocols". These are conceptual examples and scenarios, not data-driven experiments that rely on publicly available datasets.
Dataset Splits | No | The paper does not use empirical datasets for experiments; its evaluation rests on conceptual "use cases" and protocol specifications, so the concept of dataset splits does not apply.
Hardware Specification | No | The paper presents a conceptual, comparative evaluation of communication protocol languages. It describes no computational experiments that would require specific hardware such as GPUs, CPUs, or cluster resources.
Software Dependencies | No | Although the paper mentions tools for some of the evaluated languages (e.g., "Scribble's tools"), it specifies no software dependencies (such as programming languages or libraries with version numbers) for the authors' own methodology or experiments, as no such implementation details are provided.
Experiment Setup | No | The paper conducts a conceptual, comparative evaluation of protocol languages by analyzing their theoretical properties and their ability to model the use cases. It involves no experimental setups, hyperparameters, training configurations, or other system-level settings typical of empirical research.
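As an aside, the abstract's statement that BSPL is based on "information causality" can be illustrated with a toy sketch. This is not the authors' tooling and not BSPL syntax; the `Role` class, message schemas, and parameter names below are all invented for illustration. The core idea: a role may emit a message only once the message's "in" parameters are already known to it, and emitting the message binds the message's "out" parameters.

```python
# Toy sketch of BSPL-style information causality (all names invented here).
# A message schema lists "in" parameters (must already be known to the
# sender) and "out" parameters (bound by the act of sending).

class Role:
    def __init__(self, name):
        self.name = name
        self.known = {}  # parameter name -> bound value

    def can_send(self, schema):
        # Information causality: every "in" parameter must be known first.
        return all(p in self.known for p in schema["ins"])

    def send(self, schema, **outs):
        assert self.can_send(schema), f"missing 'in' parameters for {schema['msg']}"
        assert set(outs) == set(schema["outs"]), "must bind exactly the 'out' parameters"
        payload = {p: self.known[p] for p in schema["ins"]} | outs
        self.known.update(outs)  # sending binds the "out" parameters
        return payload

# Schemas loosely modeled on a Purchase-style exchange (illustrative only):
RFQ = {"msg": "rfq", "ins": [], "outs": ["ID", "item"]}
QUOTE = {"msg": "quote", "ins": ["ID", "item"], "outs": ["price"]}

buyer, seller = Role("Buyer"), Role("Seller")
m1 = buyer.send(RFQ, ID=1, item="book")  # ok: rfq needs no "in" parameters
seller.known.update(m1)                  # receiving a message binds its parameters
m2 = seller.send(QUOTE, price=10)        # ok only now: ID and item are known
```

Note that a seller who has not received `rfq` cannot send `quote` (its `can_send` check fails), which captures, in miniature, how causality falls out of parameter flow rather than an explicit control-flow ordering.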