Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Online Learning via the Differential Privacy Lens
Authors: Jacob D. Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari
NeurIPS 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings. The differential privacy framework is, at heart, less about privacy and more about algorithmic stability, and thus has found application in domains well beyond those where information security is central. Here we develop an algorithmic property called one-step differential stability which facilitates a more refined regret analysis for online learning methods. We show that tools from the differential privacy literature can yield regret bounds for many interesting online learning problems including online convex optimization and online linear optimization. |
| Researcher Affiliation | Collaboration | Jacob Abernethy, College of Computing, Georgia Institute of Technology; Young Hun Jung, Department of Statistics, University of Michigan; Chansoo Lee, Google Brain; Audra McMillan, Simons Inst. for the Theory of Computing, Department of Computer Science, Boston University, and Khoury College of Computer Sciences, Northeastern University; Ambuj Tewari, Department of Statistics and Department of EECS, University of Michigan |
| Pseudocode | Yes | Algorithm 1: Online convex optimization using Obj-Pert by Kifer et al. [23]; Algorithm 2: Gradient-Based Prediction Algorithm (GBPA) for the experts problem; Algorithm 3: GBPA for the bandits-with-experts problem |
| Open Source Code | No | The paper does not provide any links or explicit statements about the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not use or make publicly available any specific datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not describe any dataset splits for validation. |
| Hardware Specification | No | The paper focuses on theoretical contributions and does not describe any specific hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe experimental setups with hyperparameters or training configurations. |
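The paper's Algorithm 2 is a Gradient-Based Prediction Algorithm (GBPA) for the experts problem: each round the learner plays the gradient of a smoothed-max potential of the cumulative losses, then observes the full loss vector. Since the paper's exact pseudocode is not reproduced here, the sketch below is a hypothetical minimal instantiation assuming the standard softmax (log-sum-exp) potential, which recovers exponential weights; the function names `softmax_weights` and `gbpa_experts` and the parameter `eta` are illustrative, not from the paper.

```python
import numpy as np

def softmax_weights(cum_losses, eta):
    """Gradient of the softmax (log-sum-exp) potential over cumulative losses.

    Yields weights proportional to exp(-eta * L_i), i.e. exponential weights.
    """
    z = -eta * (cum_losses - cum_losses.min())  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def gbpa_experts(loss_matrix, eta=0.5):
    """GBPA-style experts algorithm (full-information setting).

    loss_matrix: T x N array of per-round expert losses in [0, 1].
    Returns the learner's cumulative expected loss and the best expert's loss.
    """
    T, N = loss_matrix.shape
    cum = np.zeros(N)
    total_loss = 0.0
    for t in range(T):
        p = softmax_weights(cum, eta)      # play gradient of the potential
        total_loss += p @ loss_matrix[t]   # incur mixture loss
        cum += loss_matrix[t]              # observe full loss vector
    return total_loss, cum.min()
```

For losses in [0, 1], this instantiation satisfies the classical exponential-weights regret bound of at most ln(N)/eta + eta*T/8, which is the kind of guarantee the paper re-derives through the one-step differential stability of the softmax map.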