Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline and validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Robust high dimensional learning for Lipschitz and convex losses
Authors: Geoffrey Chinot, Guillaume Lecué, Matthieu Lerasle
JMLR 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A short simulation study illustrating our theoretical findings is presented in Section 7. We illustrate this principle in a Simulations section, where minmax MOM versions of classical proximal descent algorithms are turned into algorithms that are robust to outliers. |
| Researcher Affiliation | Academia | Chinot Geoffrey EMAIL Department of Statistics ETH Zurich... Lecué Guillaume EMAIL Department of Statistics ENSAE CREST... Lerasle Matthieu EMAIL Department of Statistics ENSAE CREST |
| Pseudocode | Yes | Algorithm 1: Proximal Descent-Ascent gradient method with median blocks. Algorithm 2: Proximal gradient descent algorithm. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide a link to a code repository. It describes algorithms and methods but does not offer concrete access to its implementation. |
| Open Datasets | No | First framework: X is a standard Gaussian random vector in Rp and ζ is a real-valued standard Gaussian variable independent of X with variance σ2. This describes synthetic data generation, not the use of a publicly available dataset. |
| Dataset Splits | No | The error rate is the proportion of misclassification on a test dataset. However, no specific details like percentages, sample counts, or methodology for the splits are given. |
| Hardware Specification | No | The paper does not mention any specific hardware used for running the simulations. The 'Simulations' section (Section 7) describes the algorithms and data generation but omits hardware specifications. |
| Software Dependencies | No | The paper does not explicitly state any specific software or library names with version numbers used for the implementation or experiments. |
| Experiment Setup | No | The number of blocks K is chosen by MOM cross-validation (see Lecué and Lerasle (2017b) for more details on that procedure). The sequences of step sizes are constant along the algorithm, (η_t)_t := η and (η̄_t)_t := η̄, and are also chosen by MOM cross-validation. While N, p, s are given (e.g., N = 1000, p = 400 and s = 30), the specific resulting values for hyperparameters like λ, K, or η are not provided. |
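Since the paper releases no code, the median-of-means (MOM) idea behind its Algorithm 2 can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the authors' implementation: it applies a MOM proximal gradient step to an ℓ1-penalized absolute (Lipschitz) loss, where each update uses only the block whose empirical loss is the median across K blocks, so a minority of corrupted blocks cannot steer the descent. The function name `mom_prox_gradient`, the choice of absolute loss, and all default hyperparameters are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (coordinatewise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mom_prox_gradient(X, y, K=10, lam=0.1, eta=0.05, n_iter=300, rng=None):
    """Illustrative MOM proximal (sub)gradient descent.

    At each iteration the sample is split into K random blocks; the block
    whose empirical absolute loss is the median over blocks supplies the
    subgradient used in the proximal update. All parameter choices here
    are placeholders, not values from the paper.
    """
    rng = np.random.default_rng(rng)
    N, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        # Re-partition the data into K equal-sized blocks at random.
        blocks = np.array_split(rng.permutation(N), K)
        losses = [np.mean(np.abs(X[b] @ w - y[b])) for b in blocks]
        # Select the block realizing the median empirical loss.
        med = blocks[int(np.argsort(losses)[K // 2])]
        # Subgradient of the absolute loss on the median block.
        grad = X[med].T @ np.sign(X[med] @ w - y[med]) / len(med)
        # Proximal step enforcing l1 (sparsity) regularization.
        w = soft_threshold(w - eta * grad, eta * lam)
    return w
```

On clean Gaussian data this recovers a sparse signal approximately; its point is only to show how the median block replaces the full-sample gradient in a standard proximal descent loop.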