Binary Classification under Local Label Differential Privacy Using Randomized Response Mechanisms
Authors: Shirong Xu, Chendi Wang, Will Wei Sun, Guang Cheng
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical results are validated by extensive simulated examples and two real applications. |
| Researcher Affiliation | Academia | Shirong Xu, Department of Statistics and Data Science, University of California, Los Angeles; Chendi Wang, Department of Statistics and Data Science, University of Pennsylvania; Will Wei Sun, Daniels School of Business, Purdue University; Guang Cheng, Department of Statistics and Data Science, University of California, Los Angeles |
| Pseudocode | No | The paper describes methods and theoretical results using mathematical notation and textual explanations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about making source code publicly available, nor does it provide links to code repositories. |
| Open Datasets | Yes | This experiment considers similar settings of the privacy parameter ϵ as Section 5.2 in order to verify our theoretical findings of DNN on the MNIST dataset (Le Cun, 1998). |
| Dataset Splits | Yes | The resultant training dataset contains 12,089 training samples and 2,042 testing samples. |
| Hardware Specification | No | The paper mentions training models but does not specify any particular hardware components like GPU models, CPU types, or memory specifications used for the experiments. |
| Software Dependencies | No | The paper states that "the overall training process of the neural network is implemented in Tensorflow (Abadi et al., 2016) with the Adam optimizer," but it does not specify version numbers for TensorFlow or any other software dependency. |
| Experiment Setup | Yes | The overall training process of the neural network is implemented in Tensorflow (Abadi et al., 2016) with the Adam optimizer and learning rate being 0.001. Additionally, we employ the early-stopping technique to monitor the training error with patience 10 and maintain the parameter with the smallest training error. |
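The mechanism named in the paper's title, binary randomized response for label-level local differential privacy, can be sketched in a few lines. This is a minimal illustration of the standard ϵ-LDP randomized response rule (keep the true label with probability e^ϵ/(1 + e^ϵ), otherwise flip it), not the authors' exact implementation; the function and variable names are my own.

```python
import math
import random


def randomized_response(label: int, epsilon: float, rng: random.Random) -> int:
    """Privatize a binary label under epsilon-local label DP.

    Keeps the true label with probability e^eps / (1 + e^eps)
    and flips it with probability 1 / (1 + e^eps).
    """
    keep_prob = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if rng.random() < keep_prob else 1 - label


# Privatize a toy label set and check the empirical flip rate.
rng = random.Random(0)
epsilon = 1.0
labels = [rng.randint(0, 1) for _ in range(100_000)]
private = [randomized_response(y, epsilon, rng) for y in labels]

flip_rate = sum(y != z for y, z in zip(labels, private)) / len(labels)
expected_flip = 1.0 / (1.0 + math.exp(epsilon))
```

With ϵ = 1, the expected flip probability is 1/(1 + e) ≈ 0.27, so roughly a quarter of the labels are randomized away; larger ϵ preserves more labels at the cost of weaker privacy.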