Private Mechanism Design via Quantile Estimation
Authors: Yuanyuan Yang, Tao Xiao, Bhuvesh Kumar, Jamie Morgenstern
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present the experimental results for the Differentially Private (DP) Myerson mechanism, comparing its performance against two standard mechanism-design baselines: the Myerson (optimal) auction and the Vickrey (second-price) auction. ... In Table 1, under non-i.i.d. distribution settings, where there is a significant revenue gap between the Vickrey auction and the Myerson auction, DPMyerson achieves a notable revenue increase (at least 11%) over the second-price mechanism. |
| Researcher Affiliation | Collaboration | Yuanyuan Yang (University of Washington); Tao Xiao (Shanghai Jiao Tong University); Bhuvesh Kumar (Snap Inc.); Jamie Morgenstern (University of Washington) |
| Pseudocode | Yes | Algorithm 1 (DP Myerson, Bounded Distribution): DPMYER(V, ϵq, ϵa, h, ϵp); Algorithm 2 (DP Estimation for Optimal Revenue): DPKOPT(V, ϵq, ϵa, ϵp, η); Algorithm 3 (Two-Stage Algorithm): ABOUNDED |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include links to a code repository or mention code in supplementary materials. |
| Open Datasets | No | Our experiments are conducted on normal and lognormal distributions truncated to positive domains. ... The paper describes using 'normal and lognormal distributions' but does not specify a publicly available dataset by name or provide access information for any dataset used in the experiments. |
| Dataset Splits | No | Each DPMyerson configuration is averaged over 50 draws, with revenue evaluated on 10,000 samples. ... The paper mentions '50 draws' and '10,000 samples' for evaluation, but it does not provide specific training/test/validation splits for a dataset. |
| Hardware Specification | No | The paper does not mention any specific hardware (e.g., GPU/CPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., libraries, frameworks, or programming languages) used in the experiments. |
| Experiment Setup | Yes | For each value profile, we test various hyperparameters: additive discretization (ϵa), quantile discretization (ϵq), and the privacy parameter (ϵp), and select the configuration with the best performance. |
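The paper's core primitive, per its title and Algorithm 1, is privately estimating quantiles of bidder values over a discretized grid. Since no source code is released, the following is only a minimal sketch of one standard approach to DP quantile estimation (the exponential mechanism over an additive grid of width ϵa); the function name `dp_quantile` and all defaults are illustrative assumptions, not a reproduction of the paper's DPMYER.

```python
import numpy as np

def dp_quantile(values, q, eps_p, lo=0.0, hi=1.0, eps_a=0.01, rng=None):
    """Differentially private q-quantile of `values` in [lo, hi].

    Candidates are grid points lo, lo+eps_a, ..., hi. A candidate t
    scores utility -|#{v < t} - q*n| (its rank error); this utility has
    sensitivity 1, so sampling t with probability proportional to
    exp(eps_p * utility / 2) is eps_p-DP (exponential mechanism).
    """
    rng = np.random.default_rng() if rng is None else rng
    values = np.clip(np.asarray(values, dtype=float), lo, hi)
    grid = np.arange(lo, hi + eps_a, eps_a)
    n = len(values)
    # Rank error of each candidate threshold.
    counts = np.array([(values < t).sum() for t in grid])
    utility = -np.abs(counts - q * n)
    # Exponential mechanism: sample with prob ∝ exp(eps_p * u / 2).
    logits = eps_p * utility / 2.0
    logits -= logits.max()  # numerical stability before exponentiating
    probs = np.exp(logits)
    probs /= probs.sum()
    return float(rng.choice(grid, p=probs))
```

For large ϵp the sampled grid point concentrates on the empirical quantile; shrinking ϵp trades accuracy for stronger privacy, mirroring the ϵa/ϵp trade-off the paper sweeps over in its hyperparameter search.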
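The two baselines in the experiments are standard auction formats. A self-contained sketch of both on i.i.d. lognormal values (the paper truncates normal draws to positive domains; lognormal is positive already) is below; the reserve value 1.0 is an arbitrary illustration, not the Myerson-optimal reserve, and the DPMyerson mechanism itself is not reproduced here.

```python
import numpy as np

def vickrey_revenue(bids):
    """Second-price (Vickrey) auction: the winner pays the second-highest bid."""
    return float(np.sort(bids)[-2])

def reserve_revenue(bids, r):
    """Second-price auction with reserve r: no sale if the top bid is below r;
    otherwise the winner pays max(second-highest bid, r). Myerson's optimal
    auction for i.i.d. regular bidders has this form with an optimal r."""
    ordered = np.sort(bids)
    top, second = ordered[-1], ordered[-2]
    if top < r:
        return 0.0
    return float(max(second, r))

# Illustrative comparison: two i.i.d. lognormal bidders, many auction rounds.
rng = np.random.default_rng(0)
bids = rng.lognormal(mean=0.0, sigma=1.0, size=(10000, 2))
avg_vickrey = np.mean([vickrey_revenue(b) for b in bids])
avg_reserve = np.mean([reserve_revenue(b, r=1.0) for b in bids])
```

With a regular value distribution, a well-chosen reserve raises expected revenue over the plain second-price auction; the paper's experiments report that the Vickrey-vs-Myerson gap is largest in non-i.i.d. settings, which is where DPMyerson's stated ≥11% revenue gain over the second-price mechanism is observed.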