Federated Minimax Optimization with Client Heterogeneity

Authors: Pranay Sharma, Rohan Panda, Gauri Joshi

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results support our theoretical claims. In this section, we evaluate the empirical performance of the proposed algorithms. We consider a robust neural training problem ... and a fair classification problem. Figures such as Figure 3, Figure 4, and Figure 5 show 'Test Accuracy' over 'Number of Communications' for various settings, demonstrating empirical evaluation.
Researcher Affiliation | Academia | Pranay Sharma (EMAIL), Department of Electrical and Computer Engineering, Carnegie Mellon University; Rohan Panda (EMAIL), Department of Electrical and Computer Engineering, Carnegie Mellon University; Gauri Joshi (EMAIL), Department of Electrical and Computer Engineering, Carnegie Mellon University. All authors are affiliated with Carnegie Mellon University, which is an academic institution, and use .edu email addresses.
Pseudocode | Yes | Algorithm 1: Fed-Norm-SGDA and Fed-Norm-SGDA+
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code, nor does it provide a link to a code repository. It mentions being 'implemented using parallel training tools in PyTorch 1.0.0 and Python 3.6.3', but this refers to third-party tools, not the authors' own implementation.
Open Datasets | Yes | We consider a robust neural training problem ... on the CIFAR10 dataset, with the VGG11 model. ... Fair Classification ... on the CIFAR10 dataset, with the VGG11 model. CIFAR10 is a widely recognized public dataset.
Dataset Splits | No | The paper describes data heterogeneity across clients via a Dirichlet distribution, uniform sampling of clients' local epoch counts, and partial client participation levels (e.g., P = 5, P = 10, full client participation with n = 15). However, it does not specify explicit train/test/validation splits for the datasets (e.g., CIFAR10) used in the experiments.
Hardware Specification | Yes | Our experiments were run on a network of n = 15 clients, each equipped with an NVIDIA Titan X GPU.
Software Dependencies | Yes | Our algorithm was implemented using parallel training tools in PyTorch 1.0.0 and Python 3.6.3.
Experiment Setup | Yes | For both the robust NN training and fair classification experiments, a batch size of 32 is used in all algorithms. A momentum parameter of 0.9 is used only in Momentum Local SGDA(+). ... Table 3 lists parameter values for the robust NN training experiments; Table 4 lists those for the fair classification experiments. These tables specify the client learning rates (η^c_y, η^c_x) and the server learning rate (γ^s_x = γ^s_y). Clients sample the number of epochs they run locally uniformly over the set {2, ..., E}, i.e., τ_i ∼ Unif[2 : E].
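The Dataset Splits row notes that client heterogeneity is induced with a Dirichlet distribution. A minimal sketch of this standard label-skew partitioning scheme (the function name, α value, and seed are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def dirichlet_partition(labels, n_clients=15, alpha=0.5, seed=0):
    """Partition dataset indices across clients with Dirichlet label skew.

    For each class, client shares are drawn from Dirichlet(alpha);
    smaller alpha means more heterogeneous (non-IID) client datasets.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        # shuffle this class's sample indices, then split by Dirichlet shares
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ix, dtype=int) for ix in client_indices]
```

Every sample is assigned to exactly one client, so the per-client index sets are disjoint and cover the dataset; the paper's exact α and client counts per experiment would come from its Tables 3 and 4 context.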
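The Experiment Setup row states that each client draws its local epoch count τ_i uniformly from {2, ..., E}. A toy sketch of one communication round under such heterogeneous local work, where the server averages per-step-normalized client updates (normalization in the spirit of the "Fed-Norm" naming; the function, objective, and hyperparameters are illustrative assumptions, not the authors' implementation):

```python
import random
import numpy as np

def run_round(x_server, grads_fn, n_clients=15, E=4, lr=0.1, seed=0):
    """One round of federated training with heterogeneous local steps.

    Each client i draws tau_i ~ Unif[2 : E] local SGD steps; the server then
    averages the per-step-normalized updates (delta_i / tau_i), so clients
    that happen to run more local steps do not dominate the aggregate.
    """
    rng = random.Random(seed)
    normalized = []
    for i in range(n_clients):
        tau = rng.randint(2, E)          # tau_i ~ Unif[2 : E], inclusive
        x = x_server.copy()
        for _ in range(tau):
            x -= lr * grads_fn(i, x)     # local SGD step on client i's loss
        normalized.append((x - x_server) / tau)
    return x_server + np.mean(normalized, axis=0)
```

On a toy quadratic with client-specific minima (gradient `x - t_i`), repeated rounds drive the server iterate toward a weighted average of the client optima, illustrating why normalizing by τ_i keeps the implicit client weights roughly uniform.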