Private Lossless Multiple Release
Authors: Joel Daniel Andersson, Lukas Retschmeier, Boel Nelson, Rasmus Pagh
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To confirm our theoretical claims, we empirically evaluate the accuracy of lossless multiple release against a baseline algorithm that uses independent releases. We evaluate the impact by focusing on noise in isolation, to avoid capturing the effect of specific queries. The baseline algorithm is a simple Gaussian mechanism where noise is drawn independently for each consecutive release. To showcase our algorithm's performance, we demonstrate how the cost incurred by uncoordinated releases grows with the number of releases, in contrast to lossless multiple release, where there is no additional cost. We repeat our experiments 10^6 times and measure the variance of the noise. The plot (Figure 2) shows, as expected, that our mechanism does not lose any utility from making multiple releases. |
| Researcher Affiliation | Academia | 1Basic Algorithms Research Copenhagen (BARC), Denmark 2University of Copenhagen, Denmark. Correspondence to: Joel Daniel Andersson <EMAIL>, Lukas Retschmeier <EMAIL>, Boel Nelson <EMAIL>, Rasmus Pagh <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Gaussian Multiple Release; Algorithm 2: Factorization Multiple Release; Algorithm 3: Histogram Gradual Release; Algorithm 4: Generic Multiple Release Parameters; Algorithm 5: Simplified Generic Multiple Release; Algorithm 6: Efficient Histogram Gradual Release |
| Open Source Code | No | The paper does not contain any explicit statement about providing source code or a link to a code repository. |
| Open Datasets | No | The paper does not mention using any specific publicly available datasets nor provides access information for any dataset. The empirical evaluation focuses on noise properties rather than specific query results on data. |
| Dataset Splits | No | The paper does not mention specific dataset splits (e.g., training, validation, test) for experimental reproduction. The empirical evaluation appears to be based on simulations or synthetic data generation for noise analysis. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies or their version numbers used for implementing the methods or running the experiments. |
| Experiment Setup | Yes | Budgets are spaced evenly on a logarithmic scale between ρ = 0.001 and ρ = 5 on the x-axis. |
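The paper does not release code, but the noise-only experiment it describes can be approximated in a few lines. The sketch below is an assumption-laden reconstruction, not the authors' implementation: it assumes the standard zCDP calibration for the Gaussian mechanism (noise scale σ = Δ/√(2ρ)), sensitivity Δ = 1, and naive budget splitting (ρ/k per release) for the uncoordinated baseline; the budget grid mirrors the log-spaced ρ ∈ [0.001, 5] axis and the 10^6 repetitions mentioned in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0        # query sensitivity (assumed)
k = 5              # number of releases (illustrative choice)
n_trials = 10**6   # repetitions, matching the paper's experiment

# Budgets spaced evenly on a log scale between rho = 0.001 and rho = 5,
# mirroring the x-axis described in the experiment setup.
rhos = np.logspace(np.log10(1e-3), np.log10(5), num=20)

def gaussian_sigma(rho, delta=1.0):
    """Noise scale of the Gaussian mechanism under rho-zCDP (standard calibration)."""
    return delta / np.sqrt(2 * rho)

for rho in rhos[:3]:  # a few grid points are enough to show the trend
    # Uncoordinated baseline: the total budget rho is split over k independent
    # releases by composition, so even the best release only gets rho/k.
    sigma_indep = gaussian_sigma(rho / k, delta)
    # Lossless multiple release: the final release enjoys the full budget rho.
    sigma_lossless = gaussian_sigma(rho, delta)
    noise_indep = rng.normal(0.0, sigma_indep, n_trials)
    noise_lossless = rng.normal(0.0, sigma_lossless, n_trials)
    print(f"rho={rho:.4g}: var(independent)={noise_indep.var():.3f}, "
          f"var(lossless)={noise_lossless.var():.3f}")
```

Under these assumptions the empirical variance of the baseline is roughly k times that of the lossless release at every budget, which is the qualitative gap Figure 2 reports: the cost of uncoordinated releases grows with the number of releases, while lossless multiple release pays nothing extra.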