Approximate Differential Privacy of the $\ell_2$ Mechanism

Authors: Matthew Joseph, Alex Kulesza, Alexander Yu

ICML 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section discusses experiments evaluating the tightness of our privacy analysis (Section 4.1) as well as the ℓ2 mechanism's error (Section 4.2) and speed (Section 4.3). |
| Researcher Affiliation | Industry | ¹Google Research, New York. Correspondence to: Matthew Joseph <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Term1UpperBound; Algorithm 2 Term2LowerBound; Algorithm 3 CheckApproximateDP |
| Open Source Code | Yes | Experiment code may be found on GitHub (Google, 2025). https://github.com/google-research/google-research/tree/master/dp_l2 |
| Open Datasets | No | The paper primarily presents a theoretical analysis of the ℓ2 mechanism's differential privacy properties. While it includes experiments in Section 4, these involve empirical estimation of privacy loss through sampling from the mechanism itself and comparison of theoretical error bounds, rather than using or providing publicly available datasets. There is no mention of external datasets like MNIST or ImageNet. |
| Dataset Splits | No | The paper does not use external datasets for typical machine learning experiments involving training, validation, and test splits. The n samples mentioned in Section 4.1 are for the empirical estimation of privacy loss in a simulation context, not for partitioning a dataset. |
| Hardware Specification | No | The last set of experiments evaluates the speed of the ℓ2 mechanism, as executed on a typical personal computer. |
| Software Dependencies | No | There is no closed-form expression for I_x(a, b), but it is a standard function in mathematical libraries like SciPy (SciPy, 2024). |
| Experiment Setup | Yes | All experiments use the ℓ2 mechanism with n_r = n_R = 1000. We fix ε = 1, δ = 0.01, and vary d = 1, 2, ..., 100. ... Throughout, binary searches use tolerance 0.001 and we use (1, 10⁻⁵)-DP. |
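The Open Datasets row notes that the experiments estimate privacy loss by sampling from the mechanism itself. As a hedged illustration (a generic sketch, not the paper's code), one standard way to sample noise with density proportional to exp(-ε‖z‖₂), i.e. the ℓ2 mechanism for a sensitivity-1 query, is to draw a uniformly random direction and an independent Gamma(d, 1/ε) radius:

```python
import math
import random

def sample_l2_mechanism(x, eps, rng=random):
    """Sample from density proportional to exp(-eps * ||z - x||_2) over R^d.

    Sketch under the assumption of l2 sensitivity 1: the direction is a
    normalized Gaussian vector (uniform on the sphere), and the radius has
    density proportional to r^(d-1) * exp(-eps * r), i.e. Gamma(d, 1/eps).
    """
    d = len(x)
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g))
    r = rng.gammavariate(d, 1.0 / eps)
    return [xi + r * gi / norm for xi, gi in zip(x, g)]
```

The Gamma radius has mean d/ε, which matches the intuition that this mechanism's noise scales linearly in the dimension under pure ε-DP.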
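The Software Dependencies row points to SciPy for the regularized incomplete beta function I_x(a, b), available there as `scipy.special.betainc(a, b, x)`. For self-containedness, a minimal pure-Python sketch via Simpson's rule, assuming a, b ≥ 1 and 0 ≤ x ≤ 1:

```python
import math

def reg_inc_beta(x, a, b, n=1000):
    """Regularized incomplete beta I_x(a, b) by Simpson's rule.

    Illustrative stand-in for scipy.special.betainc(a, b, x); assumes
    a, b >= 1 so the integrand is bounded on [0, 1], and n is even.
    """
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    # log B(a, b) via log-gamma for numerical stability
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    h = x / n
    s = f(0.0) + f(x)
    s += 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(2 * i * h) for i in range(1, n // 2))
    return (s * h / 3) / math.exp(log_beta)
```

Sanity checks follow from closed forms: I_x(1, b) = 1 - (1 - x)^b, and by symmetry I_{1/2}(a, a) = 1/2.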