Learning to Design Fair and Private Voting Rules

Authors: Farhad Mohsin, Ao Liu, Pin-Yu Chen, Francesca Rossi, Lirong Xia

JAIR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To add to our theoretical work, on the practical front we present two frameworks to design new voting rules with varying levels of fairness, privacy, and economic efficiency. ... Experimentally, we show that the learned family of voting rules succeeds in achieving high fairness and efficiency satisfaction levels, based on simulations on synthetic data. ... Finally, we experimentally verify our theoretical results for the fairness-efficiency-privacy trade-off, showing that for moderate privacy requirements (when the noise level is not very high), the loss in efficiency and fairness is small. ... All our experimental results focus on two-group scenarios."
Researcher Affiliation | Collaboration | Farhad Mohsin, Ao Liu (Rensselaer Polytechnic Institute); Pin-Yu Chen, Francesca Rossi (IBM Research); Lirong Xia (Rensselaer Polytechnic Institute)
Pseudocode | Yes | Algorithm 1: Learning framework with sample mixing (β-mix); Algorithm 2: Data set generation with β-sampling; Algorithm 3: Learning framework with soft labeling (β-soft).
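The sample-mixing idea named in Algorithm 1 can be illustrated with a minimal sketch. This is not the authors' implementation: the function names are hypothetical, and plurality and Borda stand in for whatever pair of base rules (e.g., fairness- vs. efficiency-oriented) the real framework mixes with parameter β.

```python
import random

def plurality_winner(profile):
    # profile: list of rankings, each a tuple of alternatives, best first.
    counts = {}
    for ranking in profile:
        counts[ranking[0]] = counts.get(ranking[0], 0) + 1
    # Ties broken toward the lowest-indexed alternative.
    return max(sorted(counts), key=lambda a: counts[a])

def borda_winner(profile):
    m = len(profile[0])
    scores = {}
    for ranking in profile:
        for pos, alt in enumerate(ranking):
            scores[alt] = scores.get(alt, 0) + (m - 1 - pos)
    return max(sorted(scores), key=lambda a: scores[a])

def beta_mix_labels(profiles, beta, rule_a, rule_b, rng):
    # Sketch of sample mixing: label each training profile with rule_a
    # with probability beta, and with rule_b otherwise.
    return [rule_a(p) if rng.random() < beta else rule_b(p)
            for p in profiles]
```

Varying β then interpolates between a learner imitating one rule and a learner imitating the other.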
Open Source Code | No | The paper mentions using the XGBoost library (Chen & Guestrin, 2016) but provides no statement or link to the authors' own source code for the described methodology.
Open Datasets | No | "For the experiments, we first do uniform sampling for all agents from both groups. ... On the other hand, we use the PL model for simulating group behavior. ... While creating training and test data, for each data point, first new PL parameters are sampled randomly. Then, a single preference profile is sampled using these group PL parameters." The experiments use synthetically generated profiles rather than a released dataset.
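The generation process quoted above (draw fresh Plackett-Luce parameters, then sample one profile per data point) can be sketched as follows. All names, group sizes, and weight values here are illustrative assumptions, not taken from the paper.

```python
import random

def sample_pl_ranking(weights, rng):
    # Plackett-Luce: repeatedly draw the next-ranked alternative with
    # probability proportional to its (positive) weight.
    alts = list(range(len(weights)))
    w = list(weights)
    ranking = []
    while alts:
        total = sum(w)
        r = rng.random() * total
        acc = 0.0
        for i, wi in enumerate(w):
            acc += wi
            if r <= acc:
                ranking.append(alts.pop(i))
                w.pop(i)
                break
    return tuple(ranking)

def sample_profile(group_sizes, group_weights, rng):
    # One PL parameter vector per group; each agent in a group samples
    # a ranking i.i.d. from that group's PL model.
    profile = []
    for size, weights in zip(group_sizes, group_weights):
        profile.extend(sample_pl_ranking(weights, rng) for _ in range(size))
    return profile

def make_data_point(num_alts, group_sizes, rng):
    # Fresh random PL parameters per data point, then a single profile,
    # mirroring the quoted generation procedure.
    weights = [[rng.random() + 0.01 for _ in range(num_alts)]
               for _ in group_sizes]
    return sample_profile(group_sizes, weights, rng)
```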
Dataset Splits | No | "For each setting, we generated 2.4 million data points to learn from. Based on the learned voting rule, we compute Condorcet efficiency and average rank utility for the preference profiles in the test set." (Section 8.1) The paper mentions generating data for learning and testing but does not specify split percentages or counts for training, validation, and test sets; only the total number of data points generated for learning is reported.
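Because the split is unspecified, a reproduction has to choose one. A simple held-out split along the lines below would do; the 10% test fraction is an arbitrary placeholder, not the paper's.

```python
import random

def train_test_split(data, test_frac, rng):
    # Shuffle indices and hold out round(n * test_frac) items for testing.
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_test = round(len(data) * test_frac)
    test = [data[i] for i in idx[:n_test]]
    train = [data[i] for i in idx[n_test:]]
    return train, test
```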
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud resources.
Software Dependencies | No | "For our experiments on β-ML rules, we chose boosted gradient trees for learning in Algorithm 1, making use of the XGBoost (Chen & Guestrin, 2016) library." (Section 8.1) The paper mentions the XGBoost library but specifies neither its version nor the versions of any other software dependencies.
Experiment Setup | No | The paper describes the general methodology for generating synthetic data and training models with the β-mix and β-soft methods, including the mixing parameter β, but it does not report hyperparameter values for the boosted gradient trees (e.g., learning rate, number of estimators, maximum depth for XGBoost) or other training configurations needed for reproduction.
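A reproduction would therefore have to guess the training configuration. The parameter dictionary below falls back on XGBoost's documented defaults (max_depth 6, eta 0.3) where the paper is silent; none of these values are confirmed by the paper, and the number of classes is a hypothetical.

```python
# Placeholder XGBoost parameters for learning a voting rule as a classifier
# over winning alternatives. These are assumptions, not the authors' values:
# where the paper is silent we use XGBoost's documented defaults.
xgb_params = {
    "objective": "multi:softmax",  # predict a single winner among m alternatives
    "num_class": 4,                # hypothetical number of alternatives m
    "max_depth": 6,                # XGBoost default; actual value unreported
    "eta": 0.3,                    # learning rate; XGBoost default
    "seed": 0,                     # fixed for repeatability
}
num_boost_round = 100              # number of boosted trees; also unreported
```

A reproduction report should state whichever values it settles on, since the fairness-efficiency results may be sensitive to them.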