Loss Minimization and Parameter Estimation with Heavy Tails

Authors: Daniel Hsu, Sivan Sabato

JMLR 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | All of the techniques we have developed in this work are simple enough to implement and empirically evaluate, and indeed in some simulated experiments, we have verified the improvements over standard methods such as the empirical mean when the data follow heavy-tailed distributions. |
| Researcher Affiliation | Academia | Daniel Hsu (EMAIL), Department of Computer Science, Columbia University, New York, NY 10027, USA; Sivan Sabato (EMAIL), Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva 8410501, Israel |
| Pseudocode | Yes | Algorithm 1: Median-of-means estimator; Algorithm 2: Robust approximation; Algorithm 3: Robust approximation with random distances; Algorithm 4: Regression for heavy tails |
| Open Source Code | No | The paper does not provide any explicit statement about the release of source code, nor does it link to a code repository or mention code in supplementary materials. |
| Open Datasets | No | The paper discusses theoretical applications and simulated experiments but does not provide specific details, links, or citations for any publicly available datasets used in those simulations. |
| Dataset Splits | No | The paper does not describe the use of any specific datasets for experimental evaluation, and therefore provides no information on training, validation, or test splits. |
| Hardware Specification | No | The paper mentions simulated experiments but does not provide any details about the hardware (e.g., GPU/CPU models, memory) used to run them. |
| Software Dependencies | No | The paper does not specify any software, libraries, or version numbers required to implement or reproduce the described methods. |
| Experiment Setup | No | The paper focuses on theoretical derivations and algorithms. While it gives theoretical conditions and sample-size requirements for its methods, it does not include experimental setup details such as hyperparameter values, model initialization, or training schedules. |
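Of the algorithms listed in the Pseudocode row, the median-of-means estimator (Algorithm 1) is simple enough to sketch directly. The following is a minimal illustrative implementation, not the paper's exact procedure; the group count `k` and the Pareto-distributed test data are assumptions chosen for demonstration, not settings from the paper's experiments.

```python
# Minimal sketch of a median-of-means estimator: split the sample into
# k groups, average within each group, and return the median of the
# group means. The round-robin split and k=11 below are illustrative
# assumptions, not the paper's parameter choices.
import random
import statistics

def median_of_means(xs, k):
    """Estimate the mean of xs robustly via k group means."""
    groups = [xs[i::k] for i in range(k)]  # round-robin split into k groups
    means = [sum(g) / len(g) for g in groups]
    return statistics.median(means)

if __name__ == "__main__":
    random.seed(0)
    # Heavy-tailed sample: Pareto draws with tail index 1.5
    # (finite mean, infinite variance).
    xs = [random.paretovariate(1.5) for _ in range(10_000)]
    print("empirical mean :", sum(xs) / len(xs))
    print("median of means:", median_of_means(xs, k=11))
```

Taking the median over group means damps the influence of the few groups contaminated by extreme draws, which is the intuition behind the improvement over the plain empirical mean under heavy-tailed distributions.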