Interpolating Predictors in High-Dimensional Factor Regression

Authors: Florentina Bunea, Seth Strimas-Mackey, Marten Wegkamp

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Figure 1 illustrates the risk behavior proved in Theorem 16. Note the descent towards zero in the regime γ := p/n > 1. For completeness, we also provide a bound on the risk R(α̂) for the low-dimensional case p < n, under model (5), in Appendix D.3. ... Figure 2 plots the excess prediction risk of the GLS and PCR predictors. We also include the excess prediction risks of the LASSO, Ridge regression, and the null estimator 0 in this figure for comparison. The tuning parameters for LASSO and Ridge regression were chosen by cross-validation. ... We plot the coefficients of α in Figure 3 for the case p = 7215 and K = 69. ... Figure 4 plots the excess risk of the GLS and other predictors for these model settings."
Researcher Affiliation: Academia. Florentina Bunea (EMAIL) and Seth Strimas-Mackey (EMAIL), Department of Statistics and Data Science, Cornell University, Ithaca, NY 14850, USA; Marten Wegkamp (EMAIL), Department of Mathematics and Department of Statistics and Data Science, Cornell University, Ithaca, NY 14850, USA.
Pseudocode: No. The paper presents its methods and proofs mathematically; there are no explicitly labeled pseudocode blocks or algorithm sections.
Open Source Code: No. The paper does not state that its code is available, nor does it link to any code repository.
Open Datasets: No. The paper uses synthetic data for simulations to illustrate theoretical results. For example, in Section 4.4, it states: "Here K increases linearly from 16 to 64, n = K^1.5 and thus increases from 64 to 512, and p increases from 33 to 4066. Further, Σ_E = I_p, Σ_Z = I_K, β = (1, ..., 1)ᵀ, and A = √p V_K, where V_K is generated by taking the first K rows of a randomly generated p × p orthogonal matrix V." No existing public datasets are used.
Dataset Splits: No. The paper uses simulated data for its figures and illustrations. It defines the parameters for generating this synthetic data but does not discuss train/validation/test splits, which are typically associated with empirical evaluation on real-world datasets.
Hardware Specification: No. The paper provides no hardware details (GPU models, CPU types, or memory) for running the simulations or experiments.
Software Dependencies: No. The paper mentions simulations for its figures but specifies no software dependencies or versions (e.g., programming languages, libraries, or frameworks with version numbers).
Experiment Setup: Yes. "Figure 1 illustrates the risk behavior proved in Theorem 16. Note the descent towards zero in the regime γ := p/n > 1. Here K increases linearly from 16 to 64, n = K^1.5 and thus increases from 64 to 512, and p increases from 33 to 4066. Further, Σ_E = I_p, Σ_Z = I_K, β = (1, ..., 1)ᵀ, and A = √p V_K, where V_K is generated by taking the first K rows of a randomly generated p × p orthogonal matrix V. ... The tuning parameters for LASSO and Ridge regression were chosen by cross-validation."
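The quoted setup can be sketched in NumPy. This is a minimal illustration, not the authors' code: the model equations (X = ZAᵀ + E, Y = Zβ + noise), the unit noise level, and the reading of A as sqrt(p) times the transpose of the first K rows of V are assumptions filled in from the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_factor_regression(n, p, K, sigma=1.0):
    """One draw from the factor regression model described in the quote:
    Sigma_E = I_p, Sigma_Z = I_K, beta = (1, ..., 1)^T, and
    A = sqrt(p) * V_K, with V_K the first K rows of a random p x p
    orthogonal matrix V. The noise level sigma is an assumption."""
    # Random orthogonal matrix via QR of a Gaussian matrix.
    V, _ = np.linalg.qr(rng.standard_normal((p, p)))
    A = np.sqrt(p) * V[:K].T            # p x K loading matrix (assumed orientation)
    Z = rng.standard_normal((n, K))     # latent factors, Sigma_Z = I_K
    E = rng.standard_normal((n, p))     # idiosyncratic noise, Sigma_E = I_p
    X = Z @ A.T + E                     # observed features
    beta = np.ones(K)
    Y = Z @ beta + sigma * rng.standard_normal(n)
    return X, Y

# In the overparametrized regime gamma = p/n > 1, the minimum-norm
# least-squares solution interpolates the training data.
X, Y = simulate_factor_regression(n=64, p=256, K=16)
alpha_hat = np.linalg.pinv(X) @ Y
print(alpha_hat.shape)                  # (256,)
print(np.allclose(X @ alpha_hat, Y))    # True: exact interpolation for p > n
```

Sweeping K (and hence n = K^1.5 and p) as in the quote, and recording the excess risk of such interpolating predictors, reproduces the kind of descent curve the review refers to in Figure 1.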