Mitigating Group Bias in Federated Learning: Beyond Local Fairness

Authors: Ganghua Wang, Ali Payani, Myungjin Lee, Ramana Rao Kompella

TMLR 2024

Reproducibility variables, results, and LLM responses:
Research Type: Experimental. "Real-data experiments demonstrate the promising performance of our proposed approach for enhancing fairness while retaining high accuracy compared to locally fair training methods."
Researcher Affiliation: Collaboration. Ganghua Wang (School of Statistics, University of Minnesota); Ali Payani (Cisco Research); Myungjin Lee (Cisco Research); Ramana Kompella (Cisco Research).
Pseudocode: Yes. Algorithm 1 (FedGFT): federated learning with globally fair training; Algorithm 2 (FedAvg): federated averaging; Algorithm 3 (FairFed): fairness-aware federated averaging; Algorithm 4 (LRW): local reweighing.
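Algorithm 2 is the standard FedAvg aggregation. As a minimal sketch (illustrative only, not the authors' code), the server averages client parameters weighted by local dataset size:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg server step: average client parameter vectors, weighted
    by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # aggregation weights sum to 1
    return sum(c * w for c, w in zip(coeffs, client_weights))
```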
Open Source Code: No. The paper does not contain an explicit statement about the release of source code, nor does it provide a link to a code repository.
Open Datasets: Yes. 1. Adult dataset (Dua & Graff, 2017). 2. COMPAS dataset (Angwin et al., 2016). 3. CelebA dataset (Liu et al., 2015).
Dataset Splits: Yes. For each dataset, we first randomly split the original dataset into three parts: training, validation, and test sets. The training set is further split into disjoint subsets that serve as the local datasets of the clients. The training set is divided as follows. First, for each client we draw the proportion of each combination of the group variable A and response variable Y from a Dirichlet distribution Dir(α); a larger α implies more homogeneous clients. Then, we randomly assign the corresponding proportion of data points to each client. Throughout this section, α takes values in {0.5, 5, 100}.
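The Dirichlet-based split described above can be sketched as follows; the function name and details are illustrative, not the paper's code:

```python
import numpy as np

def dirichlet_partition(labels, groups, n_clients=10, alpha=0.5, seed=0):
    """Assign sample indices to clients so that, for each (A, Y)
    combination, the proportions across clients are drawn from
    Dir(alpha); a larger alpha yields more homogeneous clients."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for a, y in set(zip(groups, labels)):
        idx = np.where((groups == a) & (labels == y))[0]
        rng.shuffle(idx)
        # Draw this combination's proportions across clients.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for c, part in enumerate(np.split(idx, cuts)):
            client_indices[c].extend(part.tolist())
    return client_indices
```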
Hardware Specification: No. The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies: No. The paper mentions models and optimizers such as a logistic regression model, a ResNet18 model, ADAM, and MultiStepLR, but does not provide version numbers for any software libraries or dependencies.
Experiment Setup: Yes. Table 3: Hyper-parameters used in our experiments (values listed for Adult, COMPAS, and CelebA, respectively). Architecture: Linear, Linear, ResNet18. Number of clients: 10, 10, 10. Communication rounds: 50, 50, 50. Batch size: 256, 256, 64. Epochs: 1, 3, 1. Optimizer: ADAM, ADAM, ADAM. Learning rate: 0.002, 0.01, 0.001. Scheduler: N/A, N/A, MultiStepLR. Weight decay: N/A, N/A, 0.1. The penalty parameter for FairFed is chosen from {0.1, 1, 10} with cross-validation, and for FedGFT from {10, 20, 50}. The regularization function used by FedGFT is J(x) = x^2.
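To illustrate how a quadratic regularizer J(x) = x^2 enters a FedGFT-style penalized objective, here is a sketch that assumes statistical parity (difference in mean predicted positive rate between two groups) as the disparity term; the paper's exact fairness measure and aggregation may differ:

```python
import numpy as np

def penalized_loss(probs, y, a, lam=10.0):
    """Illustrative penalized objective: mean binary cross-entropy plus
    lam * J(disparity) with J(x) = x**2, where the disparity is the
    statistical parity gap between groups a=1 and a=0."""
    eps = 1e-12  # avoid log(0)
    ce = -np.mean(y * np.log(probs + eps) + (1 - y) * np.log(1 - probs + eps))
    disparity = probs[a == 1].mean() - probs[a == 0].mean()
    return ce + lam * disparity ** 2
```

With a zero disparity the penalty vanishes and the objective reduces to the plain cross-entropy; larger lam trades accuracy for group fairness.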