New Insights for the Multivariate Square-Root Lasso
Authors: Aaron J. Molstad
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In both simulation studies and a genomic data application, we show that the multivariate square-root lasso can outperform more computationally intensive methods that require explicit estimation of the error precision matrix. |
| Researcher Affiliation | Academia | Aaron J. Molstad, Department of Statistics and Genetics Institute, University of Florida, Gainesville, FL 32611, USA |
| Pseudocode | Yes | Algorithm 1: Prox-linear ADMM algorithm for (2) |
| Open Source Code | Yes | An R package implementing our method is available for download at https://github.com/ajmolstad/MSRL. |
| Open Datasets | Yes | We used our method to model the linear relationship between microRNA expression and gene expression in patients with glioblastoma multiforme, an aggressive brain cancer, collected by The Cancer Genome Atlas program (TCGA, Weinstein et al. (2013)). |
| Dataset Splits | Yes | For one hundred independent replications, we randomly split the data into training and testing sets of size 250 and 147, respectively. ... For MSR-CV and PLS, tuning parameters are selected by five-fold cross-validation minimizing squared prediction error averaged over all responses. |
| Hardware Specification | No | The paper discusses computing times for various algorithms but does not specify the hardware (e.g., CPU, GPU, memory) on which these computations were performed. It only mentions an R package is available. |
| Software Dependencies | No | The paper mentions using an "R package" for its implementation and references "CVX (Grant and Boyd, 2014)" and the "R package camel" for comparisons. However, it does not provide specific version numbers for its own R package dependencies or the R environment itself. |
| Experiment Setup | Yes | Our default implementation sets ε_rel = 10^-4 and ε_abs = 10^-10. We also adaptively update the step size ρ. Unlike the scheme originally proposed in Boyd et al. (2011), we update ρ every κth iteration using ρ ← ρ·(1(r^(k+1) > 10·s^(k+1)) − 0.5·1(s^(k+1) > 10·r^(k+1)) + 1). In our default implementation, we use κ = 10. |
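The adaptive step-size rule quoted in the Experiment Setup row amounts to doubling ρ when the primal residual r dominates the dual residual s, and halving it in the opposite case. A minimal sketch in Python (the paper's package is in R; the function name and default thresholds here are illustrative, not the authors' code):

```python
def update_rho(rho, r, s, threshold=10.0):
    """Adaptive ADMM step-size update, per the rule quoted above.

    Doubles rho when the primal residual r exceeds 10x the dual residual s,
    halves rho when s exceeds 10x r, and leaves rho unchanged otherwise.
    This equals rho * (1(r > 10s) - 0.5 * 1(s > 10r) + 1).
    """
    if r > threshold * s:
        return rho * 2.0
    if s > threshold * r:
        return rho * 0.5
    return rho
```

In the paper's default implementation this check is applied only every κ = 10 iterations, rather than at every iteration as in Boyd et al. (2011).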
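The Dataset Splits row describes a 250/147 train/test split with five-fold cross-validation on the training set. A small sketch of that protocol, assuming a total sample size of 397 and using simple index shuffling (the seed handling and helper name are hypothetical, not from the paper):

```python
import random

def split_and_folds(n=397, n_train=250, n_folds=5, seed=1):
    """Sketch of the evaluation protocol quoted above: randomly split n
    samples into training (250) and test (147) sets, then partition the
    training indices into five cross-validation folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # illustrative seeding, not the authors'
    train, test = idx[:n_train], idx[n_train:]
    # Strided assignment gives five nearly equal, disjoint folds.
    folds = [train[i::n_folds] for i in range(n_folds)]
    return train, test, folds
```

In the paper this split is repeated over one hundred independent replications, with tuning parameters chosen to minimize squared prediction error averaged over all responses.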