Asymptotic Fair Division: Chores Are Easier Than Goods

Authors: Pasin Manurangsi, Warut Suksompong

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that an envy-free allocation exists with high probability provided that m ≥ 2n, and moreover, m must be at least n + Θ(√n) in order for the existence to hold. On the other hand, we prove that a proportional allocation is likely to exist as long as m = ω(1), and this threshold is asymptotically tight. Our results reveal a clear contrast with the allocation of goods, where a larger number of items is necessary to ensure existence for both notions.
Researcher Affiliation | Collaboration | ¹Google Research, Thailand; ²National University of Singapore, Singapore. EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Matching for Envy-Free Allocation... Algorithm 2: Matching for Proportional Allocation
Open Source Code | No | The paper does not contain any explicit statement about releasing code, nor does it provide a link to a code repository. It mentions a CoRR preprint [Manurangsi and Suksompong, 2025], but this is not a code release.
Open Datasets | No | The paper is theoretical and models disutilities as drawn independently from a non-atomic probability distribution D. It does not use or refer to any specific, named datasets.
Dataset Splits | No | The paper does not use any specific datasets; therefore, no dataset splits are provided.
Hardware Specification | No | The paper presents theoretical results and algorithms and reports no experimental evaluation that would require specific hardware. No hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and focuses on mathematical proofs and algorithm design. It does not mention any software dependencies or version numbers for implementing or running experiments.
Experiment Setup | No | The paper presents theoretical findings and algorithms. No experiments are described that would require a setup with hyperparameters or training configurations.
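The paper itself contains no experiments, but its headline contrast (envy-free allocations of chores need roughly m ≥ 2n chores, while proportional allocations already appear for much smaller m) can be illustrated numerically. The sketch below is not the paper's matching algorithms: it is a hypothetical brute-force Monte-Carlo check over all nᵐ allocations, with iid uniform disutilities standing in for the non-atomic distribution D, and is only feasible for tiny n and m.

```python
import itertools
import random

def has_envy_free(disutil):
    """Brute force: does some allocation of the m chores to the n agents
    make every agent weakly prefer (cost-wise) their own bundle?"""
    n, m = len(disutil), len(disutil[0])
    for assign in itertools.product(range(n), repeat=m):
        # cost[i][j] = agent i's total disutility for agent j's bundle
        cost = [[0.0] * n for _ in range(n)]
        for chore, owner in enumerate(assign):
            for i in range(n):
                cost[i][owner] += disutil[i][chore]
        if all(cost[i][i] <= cost[i][j] for i in range(n) for j in range(n)):
            return True
    return False

def has_proportional(disutil):
    """Brute force: does some allocation give each agent disutility
    at most a 1/n fraction of their disutility for all chores?"""
    n, m = len(disutil), len(disutil[0])
    for assign in itertools.product(range(n), repeat=m):
        own = [0.0] * n
        for chore, owner in enumerate(assign):
            own[owner] += disutil[owner][chore]
        if all(own[i] <= sum(disutil[i]) / n for i in range(n)):
            return True
    return False

def existence_freq(check, n, m, trials=200, seed=0):
    """Fraction of random instances (iid uniform disutilities) on which
    an allocation passing `check` exists."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        d = [[rng.random() for _ in range(m)] for _ in range(n)]
        hits += check(d)
    return hits / trials
```

For example, with n = 2 agents an envy-free allocation never exists for a single chore (the receiver always envies the empty bundle), whereas for m = 4 chores it exists in most random instances; proportional allocations are at least as frequent, since for additively valued chores envy-freeness implies proportionality.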