Auditing $f$-differential privacy in one run
Authors: Saeed Mahloujifar, Luca Melis, Kamalika Chaudhuri
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with our approach on both simple Gaussian mechanisms as well as a model trained on real data with DP-SGD. Our experiments show that our auditing procedure can significantly outperform that of (Steinke et al., 2023) (see Figure 1). ... Finally, in Section 4, we describe the experimental setup used to compare the bounds. 4. Experiments |
| Researcher Affiliation | Industry | 1Meta. Correspondence to: Saeed Mahloujifar <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Membership inference in one run game ... Algorithm 2 Reconstruction in one run game ... Algorithm 3 Numerically deciding an upper bound probability of making more than c correct guesses ... Algorithm 4 Simulate the Number of Correct Guesses |
| Open Source Code | Yes | Auditing code: "Here we include the code to compute empirical epsilon." The snippet begins with `from scipy.stats import norm` and `import numpy as np`. |
| Open Datasets | Yes | Experiments on CIFAR-10 ... We also report in Figure 12 our privacy analysis method in the black-box attack setting on the tabular dataset of shopping records Purchase (Shokri et al., 2017). |
| Dataset Splits | No | The paper mentions using "all training points from CIFAR-10 n = 50,000 for the attack" and, in another experiment, training "on half of the dataset chosen at random". These describe data usage but give no specific counts or percentages for distinct training, validation, and test splits, which would be needed to reproduce the data partitioning exactly. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The "Auditing code" section includes Python code snippets using libraries like numpy and scipy, but it does not specify any version numbers for these libraries or Python itself. |
| Experiment Setup | Yes | We set the batch size to 4,096, using augmented multiplicity of K = 16 and training for 2,500 DP-SGD steps. For ε = 8.0, δ = 10⁻⁵, we achieved 77% accuracy... We follow the setting proposed by (Sander et al., 2023), which uses custom augmentation multiplicity (i.e., random crop around the center with 20 pixels padding with reflect, random horizontal flip and jitter) and applies an exponential moving average of the model weights with a decay parameter of 0.9999. |
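The "Auditing code" entry above refers to a script that converts membership-inference guess counts into an empirical epsilon. As a rough illustration of the idea only (not the authors' released code, which targets general $f$-DP), the sketch below certifies a lower bound on ε in the pure ε-DP special case, in the style of the Steinke et al. (2023) one-run bound: under ε-DP, each of `m` guesses is correct with probability at most e^ε/(e^ε + 1). The function name, the ε grid, and the significance level β are all assumptions made for this sketch.

```python
from scipy.stats import binom
import numpy as np


def empirical_epsilon(m, c, beta=0.05, eps_grid=None):
    """Hypothetical sketch: largest epsilon refuted by observing c correct
    guesses out of m, at significance level beta, assuming each guess is
    independently correct with probability at most e^eps / (e^eps + 1)
    under pure eps-DP (a simplification of the paper's f-DP audit)."""
    if eps_grid is None:
        eps_grid = np.linspace(0.0, 10.0, 1001)
    certified = 0.0
    for eps in eps_grid:
        p = np.exp(eps) / (np.exp(eps) + 1.0)
        # binom.sf(c - 1, m, p) = P[Binom(m, p) >= c]; if this tail
        # probability is below beta, eps-DP is rejected, so the audit
        # certifies an empirical epsilon of at least eps.
        if binom.sf(c - 1, m, p) < beta:
            certified = eps
        else:
            break
    return certified
```

For example, more correct guesses out of the same number of canaries yield a larger certified epsilon, which is the monotonicity an auditing plot like the paper's Figure 1 relies on.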