Differentially Private Covariance Estimation
Authors: Kareem Amin, Travis Dick, Alex Kulesza, Andres Munoz, Sergei Vassilvitskii
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our empirical results demonstrate lower reconstruction error for our algorithm when compared to other methods on both simulated and real-world datasets." and "Finally, we perform an empirical evaluation of our algorithm, comparing it to existing methods on both synthetic and real-world datasets (Section 4)." |
| Researcher Affiliation | Collaboration | Kareem Amin (Google Research NY), Travis Dick (Carnegie Mellon University), Alex Kulesza (Google Research NY), Andrés Muñoz Medina (Google Research NY), Sergei Vassilvitskii (Google Research NY) |
| Pseudocode | Yes | Pseudocode for our method is given in Algorithm 1. and Pseudocode for their method is given in Algorithm 2 in the appendix. |
| Open Source Code | No | The paper does not provide any explicit statements or links for open-source code for the described methodology. |
| Open Datasets | Yes | "We measure the performance of our algorithm on three different datasets: Wine, Adult, and Airfoil from the UCI repository. These datasets have dimensions ranging from 13 to 108, and numbers of points from 200 to 49,000." (https://archive.ics.uci.edu/ml/datasets/) |
| Dataset Splits | No | The paper uses datasets from the UCI repository but does not explicitly provide details about train/validation/test splits, proportions, or specific methods for creating these splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions) needed to replicate the experiments. |
| Experiment Setup | Yes | "We run each algorithm with privacy parameter ε ∈ {0.01, 0.1, 0.2, 0.5, 1.0, 2.0, 4.0}. For the Gaussian mechanism, we also varied the parameter δ ∈ {1e-16, 1e-10, 1e-3}. We ran each experiment 50 times, showing the average error in Figure 1." |
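For context on the experiment setup above, the Gaussian-mechanism baseline that the paper compares against can be sketched as follows. This is a generic illustration, not the paper's own algorithm: the function name, the unit-row-norm assumption, and the noise scale σ = sqrt(2 ln(1.25/δ))/ε are our own choices for a standard (ε, δ)-DP Gaussian mechanism applied to the covariance matrix XᵀX.

```python
import numpy as np

def gaussian_mechanism_covariance(X, epsilon, delta, seed=None):
    """Release X^T X with (epsilon, delta)-DP via the Gaussian mechanism.

    Sketch only: assumes every row of X has L2 norm at most 1, so adding or
    removing one row changes X^T X by at most 1 in Frobenius norm
    (L2 sensitivity 1). Noise is calibrated with the classical bound
    sigma = sqrt(2 ln(1.25/delta)) / epsilon.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    # Sample an upper-triangular noise matrix (including the diagonal) and
    # mirror it, so the released matrix stays symmetric with per-entry
    # noise of standard deviation sigma.
    upper = np.triu(rng.normal(0.0, sigma, size=(d, d)))
    noise = upper + np.triu(upper, 1).T
    return X.T @ X + noise
```

Symmetrizing by mirroring the upper triangle (rather than adding independent noise to both halves) keeps the output a valid symmetric matrix without spending extra privacy budget on redundant entries.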