You Get What You Give: Reciprocally Fair Federated Learning

Authors: Aniket Murhekar, Jiaxin Song, Parnian Shahkar, Bhaskar Ray Chaudhury, Ruta Mehta

ICML 2025

Reproducibility assessment (variable, result, and supporting LLM response):

Research Type: Experimental. "We validate our theoretical results through experiments, demonstrating that MShap outperforms baselines in terms of fairness and efficiency. We empirically evaluate our mechanisms on five datasets: MNIST, Fashion MNIST, CIFAR-10, Lumpy Skin Disease, and a synthetic quadratic regression dataset."

Researcher Affiliation: Academia. "1 Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, USA; 2 Department of Industrial Systems Engineering, University of Illinois Urbana-Champaign, Urbana, USA; 3 Department of Computer Science, University of California, Irvine, Irvine, USA. Correspondence to: Bhaskar Ray Chaudhury <EMAIL>, Ruta Mehta <EMAIL>."

Pseudocode: Yes. "Algorithm 1 Fed BR-Shap protocol"

Open Source Code: No. The paper describes the Fed BR-Shap protocol in Algorithm 1 but makes no explicit statement about releasing an implementation and provides no repository link.

Open Datasets: Yes. "We empirically evaluate our mechanisms on five datasets: MNIST, Fashion MNIST, CIFAR-10, Lumpy Skin Disease, and a synthetic quadratic regression dataset." Afshari Safavi, E. Lumpy skin disease dataset, 2021. URL https://doi.org/10.17632/7pyhbzb2n9.1.

Dataset Splits: Yes. In MNIST, each client has 175-191 batches of training data and 17-18 batches of testing data; in Fashion MNIST, 173-192 training batches and 17-18 testing batches; in CIFAR-10, 27 training batches and 6 testing batches. For the other two datasets, one agent's dataset consists of 70% positive and 30% negative data points, and the other agent's dataset the reverse.

Hardware Specification: Yes. "Our experiments were conducted on the Illinois Campus Cluster configured with one node with 16 cores, Fedora 9.4 operating system, and one A100 GPU."

Software Dependencies: No. The paper names the Fedora 9.4 operating system but gives no versions for the libraries or frameworks used in the experiments.

Experiment Setup: Yes. "We set the number of agents as 30 for the three image-based datasets (MNIST, Fashion MNIST, and CIFAR-10) and randomly sample 10 agents to update their shares in each iteration. For the remaining two datasets, we set the number of agents to two and perform no sampling. We adopt the statistical heterogeneous setting... We run the best response dynamics for all three mechanisms for 1000 iterations. We set the step size δ of best response dynamics to be 10, and the learning rate α is set as 0.1 for the local training. ...we run the training for 100 epochs..."
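The experiment-setup row above (30 agents, 10 sampled per iteration, 1000 iterations of best response dynamics with step size δ = 10) can be sketched as a minimal sampled best-response loop. This is an illustrative sketch only, not the paper's Fed BR-Shap protocol: the function name `best_response_dynamics` and the per-agent `utility` callback are assumptions introduced here, and the real mechanism's utilities come from federated training rather than a closed-form function.

```python
import random

def best_response_dynamics(num_agents=30, sample_size=10, iters=1000,
                           step=10.0, utility=None, rng=None):
    """Hypothetical sketch of sampled best-response dynamics.

    Each iteration samples `sample_size` agents; each sampled agent
    moves its contribution share up or down by `step` (the delta from
    the setup above) whenever that move improves its own utility.
    """
    rng = rng or random.Random(0)
    shares = [0.0] * num_agents
    for _ in range(iters):
        for i in rng.sample(range(num_agents), sample_size):
            current = utility(i, shares)
            best = shares[i]
            # Candidate moves: one step up, one step down (shares stay >= 0).
            for candidate in (shares[i] + step, max(0.0, shares[i] - step)):
                trial = shares.copy()
                trial[i] = candidate
                if utility(i, trial) > current:
                    best, current = candidate, utility(i, trial)
            shares[i] = best
    return shares
```

With a toy concave utility such as `lambda i, s: -(s[i] - 50.0) ** 2`, every agent's share climbs in steps of 10 until it settles at the utility's peak, mirroring how each agent in the paper's dynamics repeatedly plays a better response until no profitable deviation remains.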