Position: AI Agents Need Authenticated Delegation

Authors: Tobin South, Samuele Marro, Thomas Hardjono, Robert Mahari, Cedric Deslandes Whitney, Alan Chan, Alex Pentland

ICML 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This position paper argues that authenticated and auditable delegation of authority to AI agents is a critical component of mitigating practical risks and unlocking the value of agents.
Researcher Affiliation | Academia | (1) MIT, Cambridge, MA, USA; (2) Department of Engineering Science, University of Oxford, UK; (3) Harvard Law School, Cambridge, MA, USA; (4) University of California, Berkeley, CA, USA; (5) Centre for the Governance of AI, Oxford, UK; (6) Stanford University, Palo Alto, CA, USA.
Pseudocode | No | The paper describes conceptual frameworks and discusses protocols in prose, but does not include any structured pseudocode or algorithm blocks (an illustrative sketch of such a delegation flow follows this table).
Open Source Code | No | The paper is a position paper outlining a theoretical framework and does not mention the release of any source code for the methodology described.
Open Datasets | No | This is a theoretical position paper that does not present experimental results or use specific datasets that would require public access information.
Dataset Splits | No | As this is a theoretical position paper without empirical experiments, there is no discussion of dataset splits.
Hardware Specification | No | This is a theoretical position paper that does not detail experimental execution and therefore does not specify the hardware used.
Software Dependencies | No | This is a theoretical position paper that does not detail experimental execution and therefore does not specify software dependencies with version numbers for replication.
Experiment Setup | No | This is a theoretical position paper that does not describe empirical experiments and therefore provides no details on experimental setup or hyperparameters.
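Since the paper itself ships no code, the following is only an illustrative sketch of the kind of authenticated, auditable delegation it argues for: a user issues a scoped, expiring, signed credential to an AI agent, and a service verifies that credential before acting on the agent's request. Every name, field, and the HMAC-based signing scheme below is an assumption made for illustration (a production scheme would more likely use standard token formats and asymmetric keys), not the paper's specification.

import hmac, hashlib, json, time, uuid

# Illustrative only: a minimal, HMAC-signed delegation credential.
# Field names, scopes, and the signing scheme are assumptions, not the paper's protocol.

SECRET = b"shared-secret-between-issuer-and-verifier"  # placeholder key for the sketch

def issue_delegation(user_id: str, agent_id: str, scopes: list[str], ttl_s: int = 3600) -> dict:
    """User (delegator) issues a scoped, expiring credential to an AI agent (delegate)."""
    claims = {
        "delegator": user_id,
        "delegate": agent_id,
        "scopes": scopes,
        "exp": int(time.time()) + ttl_s,
        "jti": str(uuid.uuid4()),  # unique credential ID supports audit logging and revocation
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_delegation(token: dict, required_scope: str) -> bool:
    """Service verifies the signature, expiry, and that the requested action is in scope."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged credential
    if token["claims"]["exp"] < time.time():
        return False  # expired delegation
    return required_scope in token["claims"]["scopes"]

token = issue_delegation("alice@example.com", "agent-123", ["calendar:read"])
print(verify_delegation(token, "calendar:read"))   # True: within delegated scope
print(verify_delegation(token, "payments:send"))   # False: outside delegated scope

The expiry and the unique credential ID are what make the delegation bounded and auditable in this sketch: a verifier can log every presented credential and reject anything expired, out of scope, or not signed by the delegating user.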