Fair Allocation with Diminishing Differences

Authors: Erel Segal-Halevi, Avinatan Hassidim, Haris Aziz

JAIR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using simulations, we compare the various fairness criteria in terms of their probability of existence, and their probability of being fair by the underlying cardinal valuations. We find that necessary-DD-proportionality fares well in both measures. We also consider envy-freeness and Pareto optimality under diminishing-differences, as well as chore allocation under the analogous condition increasing-differences.
Researcher Affiliation | Academia | Erel Segal-Halevi (EMAIL), Ariel University, Ariel 40700, Israel; Avinatan Hassidim (EMAIL), Bar-Ilan University, Ramat-Gan 5290002, Israel; Haris Aziz (EMAIL), UNSW Sydney and Data61 CSIRO, Australia
Pseudocode | Yes | Theorem 4.1 implies that there is a polynomial-time algorithm to check whether X ≽NDD Y; see Algorithm 1. Algorithm 1: Checking the NDD relation. Algorithm 2: Balanced round-robin allocation of items.
Open Source Code | Yes | The Python code used for the experiments is available at GitHub: https://github.com/erelsgl/fair-diminishing-differences
Open Datasets | No | To simulate valuations with partial correlation, we determined for each item a market value drawn uniformly at random from [1, 2]. We determined the cardinal value of each item to each agent as the item's market value plus noise drawn uniformly at random from [−A, A], where A ∈ [0, 1] is a parameter. Based on the cardinal values, we determined the agent's ordinal ranking. Then, for each such utility-profile, we checked various statistics: how many allocations are NecPR/NDDPR/PDDPR/PosPR according to the ordinal rankings; how many NecPR/NDDPR/PDDPR/PosPR allocations are indeed proportional according to the underlying cardinal valuations.
Dataset Splits | No | The paper describes generating 'randomly-generated instances' for simulations but does not specify any training/test/validation dataset splits, as it does not involve training models on pre-defined datasets.
Hardware Specification | No | The paper does not explicitly mention the specific hardware (CPU, GPU, etc.) used for running the simulations or experiments.
Software Dependencies | No | The paper mentions 'Python code used for the experiments' but does not specify any version numbers for Python or any libraries used (e.g., numpy, pandas, scikit-learn, PyTorch, TensorFlow).
Experiment Setup | Yes | We did this experiment for n ∈ {2, 3} agents, for different values of A ∈ {0.1, ..., 1}, and for different numbers l of items per agent: l ∈ {2, ..., 8} when n = 2, or l ∈ {2, ..., 5} when n = 3. For each combination, we checked 1000 randomly-generated instances.
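The sampling procedure described under "Open Datasets" (a market value drawn uniformly from [1, 2] per item, per-agent noise drawn uniformly from [−A, A], and ordinal rankings derived from the resulting cardinal values) can be sketched in a few lines of Python. This is a minimal illustration of the described setup, not code from the paper's repository; the function name `generate_profile` is our own.

```python
import random


def generate_profile(n_agents, n_items, noise_amplitude, rng=random):
    """Generate partially-correlated cardinal valuations and the
    ordinal rankings derived from them, following the setup described
    in the paper: each item has a market value uniform in [1, 2], and
    each agent's value for an item is that market value plus noise
    uniform in [-A, A], where A = noise_amplitude."""
    market = [rng.uniform(1, 2) for _ in range(n_items)]
    values = [
        [m + rng.uniform(-noise_amplitude, noise_amplitude) for m in market]
        for _ in range(n_agents)
    ]
    # Each agent's ordinal ranking: item indices sorted best-first.
    rankings = [sorted(range(n_items), key=lambda i: -v[i]) for v in values]
    return values, rankings
```

A smaller noise amplitude A makes the agents' rankings more correlated, which is the knob the experiments sweep over.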
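Algorithm 2 in the paper is a *balanced* round-robin allocation; the report does not reproduce its exact picking order, so the following is a plain round-robin sketch (agents pick their highest-ranked remaining item in a fixed cyclic order). It is an illustrative assumption, not the paper's balanced variant.

```python
def round_robin(rankings):
    """Plain round-robin picking: agents take turns, in a fixed cyclic
    order, picking their highest-ranked remaining item.  `rankings[a]`
    lists agent a's item indices, best first.  (The paper's 'balanced'
    round-robin may use a different picking order.)"""
    n_agents = len(rankings)
    n_items = len(rankings[0])
    remaining = set(range(n_items))
    allocation = [[] for _ in range(n_agents)]
    for turn in range(n_items):
        agent = turn % n_agents
        # Pick this agent's best-ranked item that is still available.
        for item in rankings[agent]:
            if item in remaining:
                allocation[agent].append(item)
                remaining.remove(item)
                break
    return allocation
```

For example, with rankings `[[0, 1, 2, 3], [1, 0, 2, 3]]` the picks alternate 0, 1, 2, 3, giving bundles `[[0, 2], [1, 3]]`.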
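The statistic "how many allocations are indeed proportional according to the underlying cardinal valuations" reduces to comparing each agent's bundle value with a 1/n share of that agent's total value. A minimal sketch (the helper name `is_proportional` is ours):

```python
def is_proportional(values, allocation):
    """Check cardinal proportionality: every agent's bundle must be
    worth, in that agent's own valuation, at least 1/n of the agent's
    value for the whole set of items.  A small tolerance guards
    against floating-point rounding."""
    n = len(values)
    for agent, bundle in enumerate(allocation):
        total = sum(values[agent])
        bundle_value = sum(values[agent][i] for i in bundle)
        if bundle_value < total / n - 1e-9:
            return False
    return True
```

In the experiments, a criterion such as NDD-proportionality is judged by how often the allocations it certifies pass a cardinal check of this kind.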