On Proper Learnability between Average- and Worst-case Robustness
Authors: Vinod Raman, Unique Subedi, Ambuj Tewari
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we study relaxations of the worst-case adversarially robust learning setup from a learning-theoretic standpoint. |
| Researcher Affiliation | Academia | Vinod Raman, Unique Subedi, and Ambuj Tewari: Department of Statistics, University of Michigan, Ann Arbor, MI 48104. |
| Pseudocode | No | The paper is theoretical and does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about releasing source code or provide links to a code repository for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on specific datasets. It refers only to abstract objects such as a distribution D over X × Y rather than named datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical validation with dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any computational experiments that would require specific hardware. |
| Software Dependencies | No | The paper is theoretical and does not describe any computational experiments that would require software dependencies with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations. |