Settling the Maximin Share Fairness for Scheduling among Groups of Machines
Authors: Bo Li, Fangxiao Wang, Shiji Xing
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we first resolve this problem and design a polynomial-time algorithm that computes a 2-approximate MMS allocation via linear programming techniques. We complement this result with a hard instance, showing that no algorithm can be better than (2 − 1/n)-approximate MMS, where n is the number of machines. Thus the approximation ratio 2 is asymptotically tight. |
| Researcher Affiliation | Academia | 1Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China. Correspondence to: Bo Li <EMAIL>, Fangxiao Wang <EMAIL>, Shiji Xing <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 (Assign Items to Groups). Input: Ordered homogeneous instance I = (G, M, c). Output: Allocation B = (B1, . . . , Bn). |
| Open Source Code | No | The paper does not provide any statement regarding the availability of open-source code or links to repositories for the methodology described. |
| Open Datasets | No | The paper does not mention the use of any specific publicly available datasets for experimental evaluation. It discusses theoretical models and algorithms for fair allocation. |
| Dataset Splits | No | The paper does not describe any experimental evaluation using datasets, and therefore no dataset split information is provided. |
| Hardware Specification | No | The paper describes theoretical algorithms and proofs, and does not include any experimental results, thus no hardware specifications are mentioned. |
| Software Dependencies | No | The paper describes theoretical algorithms and proofs, and does not include any experimental results, thus no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper describes theoretical algorithms and proofs, and does not include any experimental results, thus no experimental setup details like hyperparameters or training configurations are provided. |
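For context on the MMS benchmark quoted in the Research Type row: the maximin share of a group is the best worst-case load it could guarantee itself by partitioning all jobs into n bundles. The sketch below is a hedged, brute-force illustration of that benchmark for the simplified single-machine-per-group case; the function name `mms_value` is hypothetical, and this is not the paper's polynomial-time linear-programming algorithm, which the excerpt only summarizes.

```python
from itertools import product

def mms_value(jobs, n):
    """Illustrative maximin share for n identical single-machine groups:
    the smallest achievable makespan over all ways of splitting the jobs
    into n bundles. Brute force, so only viable for tiny instances."""
    best = float("inf")
    for assignment in product(range(n), repeat=len(jobs)):
        loads = [0.0] * n
        for job, bundle in zip(jobs, assignment):
            loads[bundle] += job
        best = min(best, max(loads))
    return best

# Tiny example: jobs of sizes 4, 3, 3, 2, 2 split between n = 2 groups.
# The balanced split {4, 3} vs {3, 2, 2} gives makespan 7.
print(mms_value([4, 3, 3, 2, 2], 2))  # → 7
```

A 2-approximate MMS allocation, in these terms, guarantees each group a bundle whose load is at most twice its `mms_value`; the paper's hard instance shows no algorithm can beat the (2 − 1/n) factor.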