Foolish Crowds Support Benign Overfitting

Authors: Niladri S. Chatterji, Philip M. Long

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove a lower bound on the excess risk of sparse interpolating procedures for linear regression with Gaussian data in the overparameterized regime. We apply this result to obtain a lower bound for basis pursuit... Our analysis exposes the benefit of an effect analogous to the wisdom of the crowd... This section is devoted to proving Theorem 1, so the assumptions of Theorem 1 are in scope throughout this section.
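Basis pursuit, the sparse interpolating procedure for which the paper proves a lower bound, solves min ||θ||₁ subject to Xθ = y. A minimal sketch of it as a linear program is below; this is a generic illustration, not the authors' code, and the problem sizes, sparsity pattern, and noise scale are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic overparameterized Gaussian data (p > n); sizes are assumed.
rng = np.random.default_rng(0)
n, p = 10, 40
X = rng.normal(size=(n, p))
theta_star = np.zeros(p)
theta_star[0] = 1.0                      # a 1-sparse ground truth, for illustration
y = X @ theta_star + 0.1 * rng.normal(size=n)

# Basis pursuit: min ||theta||_1  s.t.  X theta = y.
# Standard LP reformulation: theta = u - v with u, v >= 0,
# so ||theta||_1 <= sum(u) + sum(v), minimized subject to X(u - v) = y.
c = np.ones(2 * p)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
theta_hat = res.x[:p] - res.x[p:]

# The solution interpolates the training data exactly (up to solver tolerance).
assert np.allclose(X @ theta_hat, y, atol=1e-6)
```

Because p > n, the equality constraints are almost surely feasible, so basis pursuit always returns an interpolator; the paper's lower bound concerns the excess risk of such interpolating solutions.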
Researcher Affiliation | Collaboration | Niladri S. Chatterji (EMAIL), Computer Science Department, Stanford University, 353 Jane Stanford Way, Stanford, CA 94305. Philip M. Long (EMAIL), Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. It focuses on mathematical proofs and definitions.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. It mentions a preliminary version posted on arXiv, but not a code release.
Open Datasets | No | The paper describes a synthetic data generation process using Gaussian distributions: 'X ∈ R^(n×p) is a matrix whose rows are i.i.d. draws from N(0, diag(...))' and 'y = Xθ + ξ'. It does not use or provide access information for any pre-existing public or open datasets.
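The quoted generation process can be sketched in a few lines. The diagonal covariance entries are elided in the excerpt ('diag(...)'), so the isotropic choice below is a placeholder assumption, as are the problem sizes, the sparse signal, and the noise scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 100                    # overparameterized regime: p > n (sizes assumed)
variances = np.ones(p)            # placeholder for the paper's elided diag(...) entries
# Rows of X are i.i.d. draws from N(0, diag(variances)).
X = rng.normal(size=(n, p)) * np.sqrt(variances)
theta = np.zeros(p)
theta[:3] = 1.0                   # a sparse parameter vector, for illustration
xi = 0.1 * rng.normal(size=n)     # Gaussian label noise
y = X @ theta + xi                # labels: y = X theta + xi
```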
Dataset Splits | No | The paper focuses on theoretical analysis using data generated according to a specified distribution (e.g., Gaussian data). It does not describe experiments that would require explicit training/test/validation splits of a pre-existing dataset.
Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware specifications.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on proving lower bounds for algorithms. It does not describe an experimental setup with hyperparameters, training configurations, or system-level settings.