Fairness-Accuracy Trade-Offs: A Causal Perspective

Authors: Drago Plecko, Elias Bareinboim

AAAI 2025

Reproducibility assessment. Each entry below lists the variable, the result, and the supporting LLM response.
Research Type: Experimental. Evidence: "Our approach is evaluated across multiple real-world datasets, providing new insights into the tension between fairness and accuracy. In this section, we perform the causal fairness-accuracy analysis described in Sec. 2 on the Census 2018 dataset (Ex. 2). Additional analyses of the COMPAS (Ex. 3) and UCI Credit (Ex. 4) datasets are reported in Appendix E."
Researcher Affiliation: Academia. Evidence: "Department of Computer Science, Columbia University, EMAIL, EMAIL" (author email addresses redacted in the extraction).
Pseudocode: Yes. Evidence: "Algorithm 1: Path-Specific Excess Loss Attributions"; "Algorithm 2: Causally-Fair Constrained Learning (CFCL)".
Open Source Code: Yes. Evidence: "All code for reproducing the experiments can be found in our GitHub repository https://github.com/dplecko/causal-acc-decomp."
Open Datasets: Yes. Evidence: "In this section, we perform the causal fairness-accuracy analysis described in Sec. 2 on the Census 2018 dataset (Ex. 2). Additional analyses of the COMPAS (Ex. 3) and UCI Credit (Ex. 4) datasets are reported in Appendix E." The UCI Credit dataset is cited as: Yeh, I.-C. 2016. Default of Credit Card Clients. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C55S3H.
Dataset Splits: No. Algorithm 2 takes as input "training data Dt, evaluation data De, set S, precision ϵ", and the text mentions that it "splits the data into train and evaluation folds, Dt and De", but the paper provides no split percentages, sample counts, or detailed splitting methodology.
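Because the paper reports no split proportions, a reproduction must choose its own. A minimal sketch of producing the Dt/De folds is below; the 70/30 ratio and the fixed seed are our assumptions, not values from the paper:

```python
import random

def split_folds(data, eval_frac=0.3, seed=0):
    """Randomly split records into training (Dt) and evaluation (De) folds.

    The 30% evaluation fraction is an assumed choice; the paper does not
    specify one.
    """
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_eval = int(len(data) * eval_frac)
    eval_idx = set(idx[:n_eval])
    # Preserve original record order within each fold.
    D_t = [data[i] for i in range(len(data)) if i not in eval_idx]
    D_e = [data[i] for i in range(len(data)) if i in eval_idx]
    return D_t, D_e
```

Fixing the seed makes the split reproducible across runs, which matters when the same folds feed both the loss-attribution and constrained-learning steps.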
Hardware Specification: No. The paper reports no hardware details (CPU/GPU models, memory, or cloud instance types).
Software Dependencies: No. No libraries or frameworks are listed with version numbers; the paper mentions neural networks and the Adam optimizer, but names no versions of any library such as TensorFlow or PyTorch.
Experiment Setup: No. Algorithm 2 says to "fit a neural network to solves [sic] the optimization problem in Eqs. 52-56 with λ = λ_mid on Dt to obtain the predictor Ŷ_S(λ_mid)". It also specifies "n_h hidden layers and n_v nodes in each layer" and a precision ϵ for the binary search. However, the paper does not report the concrete values of these hyperparameters (n_h, n_v, ϵ, or the initial λ_high) used in the experiments.
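From the description above, the binary search over the penalty λ in Algorithm 2 can be sketched as follows. This is a minimal sketch, not the authors' implementation: the `fit_predictor` and `fairness_gap` callables, the bracket `[lam_low, lam_high]`, and the default ϵ are all assumptions, since the paper does not report these values:

```python
def cfcl_binary_search(fit_predictor, fairness_gap, D_t, D_e,
                       lam_low=0.0, lam_high=10.0, eps=1e-3):
    """Binary-search sketch for the smallest penalty lambda that satisfies
    the causal fairness constraint on the evaluation fold De.

    fit_predictor(D_t, lam) -> predictor trained with penalty lam
        (stands in for solving the penalized objective, Eqs. 52-56)
    fairness_gap(predictor, D_e) -> causal fairness measure to drive to ~0
    """
    predictor = None
    while lam_high - lam_low > eps:
        lam_mid = (lam_low + lam_high) / 2.0
        predictor = fit_predictor(D_t, lam_mid)
        if abs(fairness_gap(predictor, D_e)) > eps:
            lam_low = lam_mid    # constraint violated: need a larger penalty
        else:
            lam_high = lam_mid   # constraint met: try a smaller penalty
    return predictor, lam_high
```

Each iteration halves the bracket, so the search needs about log2((λ_high − λ_low)/ϵ) model fits; the reported precision ϵ therefore directly controls training cost, which is one reason its missing value hampers reproduction.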