The Effects of Experience on Deception in Human-Agent Negotiation
Authors: Johnathan Mell, Gale Lucas, Sharon Mozgai, Jonathan Gratch
JAIR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present a series of three user studies that challenge this initial assumption and expand on this picture by examining the role of past experience. This work continues by presenting the results of a series of three studies that examine how negotiating experience can change which negotiation tactics and strategies humans endorse. Our first experimental study (see Section 5.1) provided answers to these questions. |
| Researcher Affiliation | Academia | Johnathan Mell EMAIL Gale M. Lucas EMAIL Sharon Mozgai EMAIL Jonathan Gratch EMAIL Institute for Creative Technologies, University of Southern California Los Angeles, CA, 90066 |
| Pseudocode | No | The paper describes the design of agents by detailing their behaviors (e.g., tough strategy, fair strategy, nice attitude, nasty attitude) and policies (Behavior Policies, Expression Policies, Message Policies) in prose, but does not present any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper describes using and modifying the 'IAGO Negotiation platform' (Mell & Gratch, 2017) and its existing agents, but does not explicitly state that the code for the modifications or for the specific agents developed in this work is open-source, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper describes collecting data through user studies conducted via Amazon's Mechanical Turk, but does not provide any specific access information (such as a link, DOI, repository name, or formal citation) for a publicly available or open dataset. The data used for analysis is collected from participants in the studies themselves. |
| Dataset Splits | No | The paper describes user studies with human participants divided into different experimental conditions (e.g., framing conditions in Study 1 and 2, and agent types in Study 3) with specified sample sizes for each condition. However, it does not refer to or provide standard training, validation, or test dataset splits in the context of machine learning model development or evaluation, as the research is behavioral and empirical rather than focused on data-driven model training. |
| Hardware Specification | No | The paper mentions that the studies were conducted using an 'online negotiation platform' and a 'web-based negotiating interface' (IAGO platform), and that human players were recruited via Amazon Mechanical Turk. However, it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run these platforms or analyze the data. |
| Software Dependencies | No | The paper mentions the 'IAGO Negotiation platform' and 'IAGO API' as the environment for the experiments, and 'G*Power software' for sample size calculations. However, it does not provide specific version numbers for IAGO or for any other software libraries, frameworks, or programming languages used in developing or running the agents and studies. |
| Experiment Setup | No | The paper details the experimental design of three user studies, including participant recruitment, survey scales (e.g., 7-point Likert scale, 4 bipolar scales), experimental conditions (e.g., 'self vs. agent' framing, 'tough vs. fair' agents, 'nice vs. nasty' agents), and the duration of interaction ('10-minute interaction'). It also describes the agents' strategies (e.g., tough agents starting with unfair offers and gradually conceding, fair agents using a mini-max regret algorithm). However, it does not provide specific technical parameters such as learning rates, batch sizes, number of epochs, or optimizer settings, as these are typically associated with machine learning model training and are not applicable to the behavioral study described. |
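The "fair agents using a mini-max regret algorithm" noted in the Experiment Setup row refers to offer selection in a multi-issue split of items. Since the paper's IAGO agent code is not published, the following is only an illustrative sketch of one common reading of that strategy: enumerate candidate allocations and pick the one minimizing the larger of the two parties' regrets (value each side gives up relative to taking everything). The item names, point values, and counts below are hypothetical, not taken from the paper.

```python
from itertools import product

# Hypothetical issue values for each side (NOT from the paper).
AGENT_VALUES = {"apples": 4, "oranges": 1, "bananas": 2}
HUMAN_VALUES = {"apples": 1, "oranges": 4, "bananas": 2}
COUNTS = {"apples": 3, "oranges": 3, "bananas": 3}

def minimax_regret_offer(agent_values, human_values, counts):
    """Return the allocation (counts of each item the agent keeps)
    that minimizes the larger of the two parties' regrets, where a
    party's regret is the value of the items it does not receive."""
    items = list(counts)
    best_offer, best_score = None, float("inf")
    # Enumerate how many of each item the agent keeps.
    for keep in product(*(range(counts[i] + 1) for i in items)):
        # Agent's regret: value of the items it gives away.
        agent_regret = sum(agent_values[i] * (counts[i] - k)
                           for i, k in zip(items, keep))
        # Human's regret: value (to the human) of the items the agent keeps.
        human_regret = sum(human_values[i] * k for i, k in zip(items, keep))
        score = max(agent_regret, human_regret)
        if score < best_score:
            best_score, best_offer = score, dict(zip(items, keep))
    return best_offer
```

With the opposed valuations above, the sketch keeps the agent's high-value item (apples), cedes the human's high-value item (oranges), and splits the equally valued bananas, which matches the intuition of a "fair" opening offer.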