Generalizing Group Fairness in Machine Learning via Utilities

Authors: Jack Blandin, Ian A. Kash

JAIR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Here we provide an experimental analysis on an environment where classification fairness metrics fail to appropriately measure fairness due to Assumption 1. In order to ensure that our analysis is consistent with other group fairness works, we leverage the fairness-comparison benchmark of Friedler et al. for data preprocessing, algorithm implementation, and fairness measurement calculations (Friedler et al., 2019)."
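One of the classification group fairness metrics that benchmarks like fairness-comparison report is the demographic parity gap. A minimal sketch of how such a metric is computed (the function name and toy data below are illustrative, not taken from the paper or the benchmark):

```python
def demographic_parity_difference(y_pred, groups, protected):
    """Gap in positive-prediction rates between the protected
    group and everyone else (0 means demographic parity holds)."""
    pos_rate = lambda idx: sum(y_pred[i] for i in idx) / len(idx)
    prot = [i for i, g in enumerate(groups) if g == protected]
    rest = [i for i, g in enumerate(groups) if g != protected]
    return abs(pos_rate(prot) - pos_rate(rest))

# Toy example: six applicants, sensitive attribute with groups "a" and "b".
preds = [1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups, "a")  # |1/3 - 2/3| = 1/3
```

The paper's central point is that metrics of this form evaluate predictions alone, which is exactly the assumption (Assumption 1) its utility-based framework relaxes.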
Researcher Affiliation | Academia | Jack Blandin and Ian A. Kash, University of Illinois at Chicago, Department of Computer Science, Chicago, IL 60607 USA
Pseudocode | No | The paper describes its framework and methodology using definitions, equations, and prose, but does not include any explicitly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code | Yes | "The full code repository to reproduce the results in this paper is available at https://github.com/jackblandin/group-fairness-in-machine-learning-via-utilities."
Open Datasets | Yes | "We consider the loan application scenario described by the German Credit Dataset (Dua & Graff, 2017), which consists of 1,000 loan application records."
Dataset Splits | Yes | "We execute and measure each algorithm using 10-fold cross-validation. For each performance measurement, we report the average value as well as the 10th and 90th percentiles."
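The aggregation scheme quoted above (per-fold average plus 10th and 90th percentiles) can be sketched as follows. The linear-interpolation percentile rule is an assumption, since the paper's aggregation code is not excerpted here:

```python
def percentile(values, p):
    """Percentile with linear interpolation between sorted samples (p in 0..100)."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def summarize_folds(fold_scores):
    """Aggregate k-fold scores into (mean, 10th percentile, 90th percentile)."""
    mean = sum(fold_scores) / len(fold_scores)
    return mean, percentile(fold_scores, 10), percentile(fold_scores, 90)

# Hypothetical accuracy scores from 10 cross-validation folds.
scores = [0.70, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79]
mean, p10, p90 = summarize_folds(scores)
```

Reporting the 10th/90th percentile band rather than a standard deviation gives a distribution-free picture of fold-to-fold variability, which is useful with only ten folds.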
Hardware Specification | No | The paper does not explicitly mention any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions evaluating its algorithms via the fairness-comparison benchmark of Friedler et al., but it does not list the specific software libraries, frameworks, or version numbers required to reproduce the experiments.
Experiment Setup | No | The paper describes the algorithms evaluated (Decision Tree, Support Vector Machine, Feldman Decision Tree, Feldman SVM, Feldman Logistic Regression, Zafar Fair) and their general approaches, as well as the parameters for the utility fairness framework (W, C, τ, ρ) and the use of 10-fold cross-validation. However, it does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs) for the training of these machine learning models.