Position: Scaling LLM Agents Requires Asymptotic Analysis with LLM Primitives

Authors: Elliot Meyerson, Xin Qiu

ICML 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This position paper argues that asymptotic analysis with LLM primitives is needed to reason about the efficiency of such decomposed systems, and that insights from such analysis will unlock opportunities for scaling them. |
| Researcher Affiliation | Industry | Cognizant AI Lab, San Francisco, USA. Correspondence to: Elliot Meyerson <EMAIL>. |
| Pseudocode | Yes | Table 1. Overview of Examples. This table gives a high-level summary of the examples described in Section 3. It lists the LLM agents used and gives pseudocode for the optimistic implementation (i.e., based on an intuitive belief in the power of LLMs) and the optimized one (based on a more careful LbA design (Def. 2.1)). |
| Open Source Code | No | The paper does not provide any information about open-source code being released or available. |
| Open Datasets | No | The paper focuses on theoretical analysis and does not describe experiments on specific datasets, so no public datasets are reported. |
| Dataset Splits | No | The paper focuses on theoretical analysis and does not describe experiments on specific datasets, so no dataset splits are reported. |
| Hardware Specification | No | As a theoretical position paper, it describes no experiments that would require specific hardware, so no hardware specifications are provided. |
| Software Dependencies | No | As a theoretical position paper, it describes no experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper presents theoretical arguments and example analyses rather than empirical experiments, and therefore includes no experimental setup details such as hyperparameters or training configurations. |