Privacy-Preserving Federated Convex Optimization: Balancing Partial-Participation and Efficiency via Noise Cancellation
Authors: Roie Reshef, Kfir Yehuda Levy
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We ran Algorithm 1 on MNIST using logistic regression. The parameters are G = √(2 · 785) ≈ 39.6, L = 785/2 = 392.5, D = 0.1, which gives us S = 118.1. Our model has d = 10 × 785 = 7850 parameters. We compared our algorithm (called Our Work) to SGD with noise, inspired by (Abadi et al., 2016) (called Noisy SGD), and to the other work (Lowy & Razaviyayn, 2023) (called Other Work). We kept the same parameters in all 3 algorithms to the best of our abilities, and in all tests the total number of data samples used across all machines is n = 60,000. ... We compare both the test accuracy and running time. For our first experiment, we fix m = 50, M = 100, and compare various values of ρ. We show our results in Table 1. |
| Researcher Affiliation | Academia | 1Faculty of Electrical and Computer Engineering, Technion, Haifa, Israel. Correspondence to: Roie Reshef <EMAIL>, Kfir Yehuda Levy <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 DP-µ²-FL with Partial-Participation |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We ran Algorithm 1 on MNIST using a logistic regression. |
| Dataset Splits | No | The paper mentions using MNIST and a total of 60,000 samples but does not specify how the dataset was split into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'logistic regression' and 'SGD' but does not specify any software libraries or their version numbers (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | We ran Algorithm 1 on MNIST using logistic regression. The parameters are G = √(2 · 785) ≈ 39.6, L = 785/2 = 392.5, D = 0.1, which gives us S = 118.1. Our model has d = 10 × 785 = 7850 parameters. ... For our first experiment, we fix m = 50, M = 100, and compare various values of ρ. |
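
The quoted parameter values appear to follow from the MNIST logistic-regression dimensions (784 pixel features plus a bias term, over 10 classes). A minimal sketch of the arithmetic, assuming these are the intended derivations (the constant S = 118.1 is quoted from the paper but its derivation is not shown):

```python
import math

# MNIST logistic regression: 784 pixel features + 1 bias term
features = 784 + 1           # 785 inputs per class
classes = 10

d = classes * features       # model dimension, quoted as 7850
G = math.sqrt(2 * features)  # gradient-norm bound, quoted as ~39.6
L = features / 2             # smoothness constant, quoted as 392.5

print(d, round(G, 1), L)     # 7850 39.6 392.5
```

These checks match the values reported in the Research Type and Experiment Setup rows above, supporting the reading of the garbled PDF extraction as √(2 · 785) and 10 × 785.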