Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise
Authors: Ilias Diakonikolas, Jelena Diakonikolas, Daniel Kane, Puqian Wang, Nikos Zarifis
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main contribution is a sample near-optimal efficient algorithm for this problem coupled with a matching statistical-computational tradeoff for SQ algorithms and low-degree polynomial tests. |
| Researcher Affiliation | Academia | Ilias Diakonikolas (University of Wisconsin, Madison); Jelena Diakonikolas (University of Wisconsin, Madison); Daniel M. Kane (University of California, San Diego); Puqian Wang (University of Wisconsin, Madison); Nikos Zarifis (University of Wisconsin, Madison) |
| Pseudocode | Yes | Algorithm 1 Main Algorithm; Algorithm 2 Optimization; Algorithm 3 Main Algorithm; Algorithm 4 Initialization; Algorithm 5 Optimization; Algorithm 6 Testing Procedure |
| Open Source Code | No | The paper does not provide an explicit statement about the release of open-source code or a link to a code repository. |
| Open Datasets | No | The paper uses the standard Gaussian distribution as a theoretical distributional assumption for its problem setting, not a specific, publicly available training dataset. No access information for any public dataset is provided. |
| Dataset Splits | No | As a theoretical paper with no empirical experiments, it specifies no training/validation/test dataset splits. |
| Hardware Specification | No | As a theoretical paper with no empirical experiments, it mentions no hardware specifications. |
| Software Dependencies | No | As a theoretical paper with no empirical experiments, it lists no software dependencies or version numbers. |
| Experiment Setup | No | As a theoretical paper with no empirical experiments, it provides no setup details, hyperparameters, or training configurations. |
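For readers unfamiliar with the problem setting named in the title, the following is a minimal, hypothetical sketch of generating data from a Gaussian halfspace corrupted by Random Classification Noise (RCN): examples are drawn from the standard Gaussian, labeled by the sign of a linear function, and each label is flipped independently with a fixed probability. The variable names (`d`, `n`, `eta`) are illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the RCN Gaussian-halfspace data model (not the
# paper's algorithm). Parameters d, n, eta are hypothetical choices.
rng = np.random.default_rng(0)
d, n, eta = 5, 1000, 0.1           # dimension, sample size, noise rate eta < 1/2

w = rng.standard_normal(d)
w /= np.linalg.norm(w)             # unit normal vector defining the halfspace

X = rng.standard_normal((n, d))    # x ~ N(0, I_d): the standard Gaussian assumption
y_clean = np.sign(X @ w)           # noiseless halfspace labels in {-1, +1}
flips = rng.random(n) < eta        # RCN: flip each label independently w.p. eta
y = np.where(flips, -y_clean, y_clean)
```

The learner observes only `(X, y)` and must approximately recover `w`; the paper's contribution concerns the sample and computational complexity of that task, which this sketch does not attempt to implement.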