LoGoFair: Post-Processing for Local and Global Fairness in Federated Learning
Authors: Li Zhang, Chaochao Chen, Zhongxuan Han, Qiyong Zhong, Xiaolin Zheng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three real-world datasets further illustrate the effectiveness of the proposed LoGoFair framework. In this section, we comprehensively evaluate the proposed LoGoFair method on three publicly available real-world datasets. |
| Researcher Affiliation | Academia | Li Zhang, Chaochao Chen*, Zhongxuan Han, Qiyong Zhong, Xiaolin Zheng Zhejiang University EMAIL, EMAIL |
| Pseudocode | Yes | We present the federated post-processing procedure in Algorithm 1 of Appendix C, along with its efficiency, privacy analysis, and additional discussion. |
| Open Source Code | No | The paper does not contain any explicit statement about providing access to source code, nor does it provide any links to a code repository. |
| Open Datasets | Yes | Datasets We consider three real-world benchmarks, Adult (Asuncion, Newman et al. 2007), ENEM (INEP 2018), and CelebA (Zhang et al. 2020), which are well-established for assessing fairness issues in FL (Ezzeldin et al. 2023; Chang and Shokri 2023; Duan et al. 2024). |
| Dataset Splits | Yes | (1) Firstly, we partition each dataset into a 70% training set and the remaining 30% as a test set, while post-processing models use half of the training set as a validation set, following previous post-processing works (Xian, Yin, and Zhao 2023; Chen, Klochkov, and Liu 2024). |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | (2) Secondly, to simulate statistical heterogeneity in the FL context, we control the heterogeneity of the sensitive-attribute distribution at each client by determining the proportion of local sensitive-group data based on a Dirichlet distribution Dir(α), as proposed by Ezzeldin et al. (2023). In this case, each client will possess a dominant sensitive group, and a smaller value of α will further reduce the data proportion of the other group, which indicates greater heterogeneity across clients. (3) Thirdly, the number of participating clients is set to 5 to simulate the FL environment. (4) Fourthly, we evaluate the FL model with accuracy (Acc), the global fairness metric M_global, and the maximal local fairness metric among clients, M_local. Since we are interested in the DP and EO criteria, the model's fairness is assessed by local and global DP/EO metrics (M_local^DP/EO, M_global^DP/EO); smaller values of the fairness metrics denote a fairer model. Here we set δ_l,c = δ_g = 0.01. |
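The Dirichlet-based heterogeneity simulation quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration under assumed conventions (the function name `partition_by_sensitive_attr` and the per-group splitting scheme are hypothetical, not the authors' code): for each sensitive group, the share of its samples assigned to each client is drawn from Dir(α), so a small α concentrates each group on few clients.

```python
import numpy as np

def partition_by_sensitive_attr(sensitive, num_clients=5, alpha=0.5, seed=0):
    """Split sample indices across clients so that each client's
    sensitive-group proportions follow a Dirichlet distribution Dir(alpha).
    Smaller alpha -> each client is dominated by one group (more heterogeneity)."""
    rng = np.random.default_rng(seed)
    sensitive = np.asarray(sensitive)
    client_indices = [[] for _ in range(num_clients)]
    for g in np.unique(sensitive):
        idx = rng.permutation(np.where(sensitive == g)[0])
        # Fraction of group g's samples sent to each client ~ Dir(alpha).
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for c, part in enumerate(np.split(idx, cuts)):
            client_indices[c].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]
```

Every sample is assigned to exactly one client, so the per-client subsets form a partition of the dataset regardless of α.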
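The local/global fairness metrics named in the same row (M_local^DP, M_global^DP) can likewise be sketched for the DP criterion. This is a hedged sketch, not the authors' implementation: it assumes binary predictions, a binary sensitive attribute `a`, the global gap computed on pooled predictions, and the local metric taken as the maximum per-client gap, matching the row's description that M_local is "the maximal local fairness metric among clients".

```python
import numpy as np

def dp_gap(y_pred, a):
    """Demographic parity gap: |P(yhat=1 | A=0) - P(yhat=1 | A=1)|."""
    y_pred, a = np.asarray(y_pred), np.asarray(a)
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def local_global_dp(y_pred, a, client_ids):
    """Return (global DP gap on pooled data, maximal per-client DP gap).
    Assumes every client holds samples from both sensitive groups."""
    y_pred = np.asarray(y_pred)
    a = np.asarray(a)
    client_ids = np.asarray(client_ids)
    global_gap = dp_gap(y_pred, a)
    local_gap = max(
        dp_gap(y_pred[client_ids == c], a[client_ids == c])
        for c in np.unique(client_ids)
    )
    return global_gap, local_gap
```

Smaller values of both gaps denote a fairer model; a model can be globally fair (small pooled gap) while a single client still shows a large local gap, which is exactly the tension the local/global framing targets.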