Private Model Personalization Revisited

Authors: Conor Snedeker, Xinyu Zhou, Raef Bassily

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The results in Figure 1 are obtained via data features drawn from N(0, I_d) with problem parameters n = 20,000, d = 50, k = 2, and m = 10. Our data labels are generated as in Assumption 6, with label noise sampled from N(0, R²) for R = 0.01. We use local GD and non-private FedRep as baselines for our comparison; see Appendix B.4 for details. Figure 1: Population MSE over the privacy parameter ε ∈ [1, 8] on synthetic data, comparing Algorithm 1 to Priv-AltMin (Jain et al., 2021).
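The synthetic setup above can be sketched in a few lines. This is a hedged reconstruction, not the authors' code: the exact form of Assumption 6 is not reproduced in the excerpt, so the shared representation U*, per-client heads w_i*, and the linear model y = x᷀ᵀ U* w_i* + noise are assumptions in the standard FedRep-style linear-regression setting.

```python
import numpy as np

# Hedged sketch of the synthetic data described above. Assumption 6 is not
# quoted here, so we ASSUME the usual shared-representation linear model
# y_{i,j} = x_{i,j}^T U* w_i* + noise; U*, W_star, and this model are
# illustrative assumptions, not the paper's exact construction.
rng = np.random.default_rng(0)
n, d, k, m, R = 20_000, 50, 2, 10, 0.01  # parameters quoted in the report

# Assumed shared ground-truth representation U* (d x k, orthonormal columns)
U_star, _ = np.linalg.qr(rng.standard_normal((d, k)))
# Assumed per-client heads w_i* (k-dimensional)
W_star = rng.standard_normal((n, k))

# Features x_{i,j} ~ N(0, I_d); label noise ~ N(0, R^2) as stated above
X = rng.standard_normal((n, m, d))
noise = rng.normal(0.0, R, size=(n, m))
Y = np.einsum("imd,dk,ik->im", X, U_star, W_star) + noise
print(X.shape, Y.shape)  # (20000, 10, 50) (20000, 10)
```

With these shapes, each of the n = 20,000 clients holds m = 10 samples of dimension d = 50, matching the problem parameters quoted above.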
Researcher Affiliation | Academia | ¹Department of Computer Science & Engineering, The Ohio State University; ²Department of Computer Science & Engineering and the Translational Data Analytics Institute (TDAI), The Ohio State University. Correspondence to: Conor Snedeker <EMAIL>, Xinyu Zhou <EMAIL>.
Pseudocode | Yes | Algorithm 1: Private FedRep for linear regression; Algorithm 2: Private Initialization for Private FedRep; Algorithm 3: Private Representation Learning for Personalized Classification.
Open Source Code | Yes | Note as well this GitHub repository with a copy of our code.
Open Datasets | No | The results in Figure 1 are obtained via data features drawn from N(0, I_d) with problem parameters n = 20,000, d = 50, k = 2, and m = 10. Our data labels are generated as in Assumption 6, with label noise sampled from N(0, R²) for R = 0.01.
Dataset Splits | Yes | Let S_i^0 = {(x_{i,j}, y_{i,j}) : j ∈ [m/2]} for all i ∈ [n], and let S_i^1 = S_i \ S_i^0 for all i ∈ [n]. Assume for simplicity that m is even. We partition S_i = S_i^0 ∪ S_i^1, where S_i^0 = {z_{i,1}, …, z_{i,m/2}} and S_i^1 = {z_{i,m/2+1}, …, z_{i,m}} for each i ∈ [n].
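The half/half per-client split above is simple to reproduce. The helper name `split_client_data` is illustrative, not from the paper; it assumes each client's samples arrive as arrays ordered j = 1, …, m.

```python
import numpy as np

# Hedged sketch of the per-client split: client i's m samples are partitioned
# into a first half S_i^0 and a second half S_i^1 (m assumed even, as in the
# paper). The function name is illustrative, not the authors'.
def split_client_data(X_i, y_i):
    m = len(y_i)
    assert m % 2 == 0, "the paper assumes m is even for simplicity"
    half = m // 2
    S0 = (X_i[:half], y_i[:half])   # S_i^0: samples 1 .. m/2
    S1 = (X_i[half:], y_i[half:])   # S_i^1: samples m/2+1 .. m
    return S0, S1

# Toy client with m = 10 samples of dimension 3
X_i = np.arange(10 * 3, dtype=float).reshape(10, 3)
y_i = np.arange(10.0)
(S0_X, S0_y), (S1_X, S1_y) = split_client_data(X_i, y_i)
print(S0_X.shape, S1_X.shape)  # (5, 3) (5, 3)
```

Splitting each client's data disjointly like this is what lets the two halves be used for separate stages (e.g. representation learning vs. head fitting) without reusing privacy budget on the same samples.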
Hardware Specification | No | No specific hardware details are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper.
Experiment Setup | Yes | Our problem is instantiated with d = 50, k = 2, m = 10, and n = 20,000. For FedRep we tune our hyperparameters, deciding on T = 5 iterations and learning rate η = 2.5 with clipping parameter ψ = 10. Similarly, Priv-AltMin is run with its iterations optimized at T = 5 and clipping parameter 10⁻⁴. The Gaussian mechanism variance for both algorithms is calculated from the privacy parameters as σ²_{ε,δ} = 16 log(1.25/δ)/ε², with δ = 10⁻⁶.
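The noise calibration and clipping above can be sketched as follows. The extracted formula is garbled in the source, so the exact shape σ² = 16 log(1.25/δ)/ε² is an assumption (the standard Gaussian-mechanism form with the paper's constant 16); the function names are illustrative.

```python
import math
import numpy as np

# Hedged sketch: the source formula is garbled, so we ASSUME the standard
# Gaussian-mechanism shape sigma^2 = 16 * log(1.25/delta) / eps^2, matching
# the constant 16 and delta = 1e-6 quoted above. Names are illustrative.
def gaussian_mechanism_sigma(eps: float, delta: float = 1e-6) -> float:
    """Noise standard deviation for privacy parameters (eps, delta)."""
    return math.sqrt(16.0 * math.log(1.25 / delta)) / eps

def clip(v: np.ndarray, psi: float) -> np.ndarray:
    """Clip a per-client update to l2-norm at most psi before adding noise."""
    norm = np.linalg.norm(v)
    return v if norm <= psi else v * (psi / norm)

# Noise scale shrinks as the privacy budget eps grows over Figure 1's range.
for eps in (1, 2, 4, 8):
    print(eps, round(gaussian_mechanism_sigma(eps), 3))
```

Clipping bounds each client's contribution (sensitivity), which is what makes the fixed noise scale sufficient for (ε, δ)-differential privacy; note FedRep's clipping parameter ψ = 10 versus Priv-AltMin's much tighter 10⁻⁴.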