The Foundations of Tokenization: Statistical and Computational Concerns

Authors: Juan Luis Gastaldi, John Terilla, Luca Malagutti, Brian DuSell, Tim Vieira, Ryan Cotterell

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models. Based on the category of stochastic maps, this framework enables us to establish general conditions for a principled use of tokenizers and, most importantly, the necessary and sufficient conditions for a tokenizer model to preserve the consistency of statistical estimators. In addition, we discuss statistical and computational concerns crucial for designing and implementing tokenizer models, such as inconsistency, ambiguity, finiteness, and sequentiality. (An illustrative round-trip sketch follows the table.)
Researcher Affiliation | Academia | Juan Luis Gastaldi (ETH Zürich), John Terilla (City University of New York), Luca Malagutti (ETH Zürich), Brian DuSell (ETH Zürich), Tim Vieira (ETH Zürich), Ryan Cotterell (ETH Zürich)
Pseudocode | No | The paper discusses formal definitions, lemmas, propositions, and theorems (e.g., Lemma 3.1, Theorem 3.1, Proposition 3.1, Proposition 5.1, Proposition 5.2) and provides proofs in the appendix. There are no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the methodology described, nor does it provide links to a code repository.
Open Datasets | No | The paper presents a theoretical framework for tokenization and does not conduct experiments that would require specific datasets. Therefore, no information about publicly available datasets is provided for its own methodology.
Dataset Splits | No | The paper introduces a theoretical framework and does not involve empirical experiments on specific datasets; therefore, there are no mentions of dataset splits.
Hardware Specification | No | The paper focuses on a formal theoretical framework for tokenization and does not describe any experimental setup or hardware used to run experiments.
Software Dependencies | No | The paper outlines a formal framework and does not detail any experimental implementation or specific software dependencies with version numbers required to reproduce its findings.
Experiment Setup | No | The paper focuses on the theoretical foundations of tokenization and does not include any experimental setup details, hyperparameters, or training configurations.
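
The consistency result summarized in the first row above concerns tokenizers viewed as encoder-decoder pairs, and a natural, if informal, sanity check for a concrete tokenizer is that decoding inverts encoding on every in-domain string. The sketch below is a minimal illustration of that round-trip check and is not taken from the paper: the toy vocabulary and the greedy `encode` and `decode` functions are hypothetical stand-ins, not the paper's formal construction over stochastic maps.

```python
# A minimal sketch (illustrative, not from the paper) of the round-trip check
# behind the kind of condition the abstract alludes to: for a tokenizer made
# of an encoder and a decoder, decode(encode(s)) == s should hold for every
# in-domain string s, so that statistics estimated over token sequences can
# be transferred back to the underlying strings.

from itertools import product

# Hypothetical toy vocabulary: one multi-character merge plus single characters.
VOCAB = {"ab": 0, "a": 1, "b": 2}
ID_TO_PIECE = {v: k for k, v in VOCAB.items()}

def encode(s: str) -> list[int]:
    """Greedy longest-match encoding over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(s):
        for piece in sorted(VOCAB, key=len, reverse=True):
            if s.startswith(piece, i):
                tokens.append(VOCAB[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"cannot tokenize {s[i:]!r}")
    return tokens

def decode(tokens: list[int]) -> str:
    """Concatenate the pieces named by the token ids."""
    return "".join(ID_TO_PIECE[t] for t in tokens)

# Exhaustively check the round trip on all strings over {a, b} up to length 4.
for n in range(1, 5):
    for chars in product("ab", repeat=n):
        s = "".join(chars)
        assert decode(encode(s)) == s, f"round-trip fails on {s!r}"
print("decode(encode(s)) == s for all strings over {a, b} up to length 4")
```

In this toy case the check passes because decoding simply concatenates pieces; a tokenizer whose decoding does not invert encoding (for example, one that applies lossy normalization before encoding) would fail the assertion, which is the informal shape of the failure mode the paper analyzes when characterizing which tokenizer models preserve estimator consistency.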