Sparse Convex Optimization via Adaptively Regularized Hard Thresholding
Authors: Kyriakos Axiotis, Maxim Sviridenko
JMLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we evaluate the training performance of different algorithms in the tasks of Linear Regression and Logistic Regression. ... The results are presented in Figures 1, 2, 3, 4. Also, in Figure 5 we present a runtime comparison between ARHT and Exhaustive Local Search in the year and census datasets. |
| Researcher Affiliation | Collaboration | Kyriakos Axiotis EMAIL Computer Science & Artificial Intelligence Laboratory (CSAIL) Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA Maxim Sviridenko EMAIL Yahoo! Research 770 Broadway, New York, NY 10003, USA |
| Pseudocode | Yes | Algorithm 1 Iterative Hard Thresholding (IHT), Algorithm 2 Greedy/OMP/Forward Stepwise Selection, Algorithm 3 Orthogonal Matching Pursuit with Replacement, Algorithm 4 Exhaustive Local Search, Algorithm 5 Adaptively Regularized Hard Thresholding core routine, Algorithm 6 Adaptively Regularized Hard Thresholding |
| Open Source Code | No | The paper does not contain an explicit statement from the authors about releasing their code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We run our experiments on publicly available regression and binary classification data sets... The data sets can be downloaded here. |
| Dataset Splits | No | The paper uses publicly available datasets but does not explicitly provide details about training, validation, or test splits, such as percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. It only discusses runtime performance generally. |
| Software Dependencies | No | The code has been implemented in python3, with libraries numpy, sklearn, and scipy. (Lacks specific version numbers for the libraries). |
| Experiment Setup | Yes | For ARHT, we used a fixed number of 20 iterations at Line 5 of Algorithm 6. In Line 19 of Algorithm 5 we slightly weaken the progress condition to g_{R_t}(x_t) − g_{R_t}(x_{t+1}) < (10^{-3}/s) · (g_{R_t}(x_t) − opt). ... For Logistic Regression we used an LBFGS solver with 1000 iterations. The LASSO solver we used is Lasso from sklearn.linear_model with 1000 iterations. |
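The Pseudocode row lists Iterative Hard Thresholding (Algorithm 1) among the baselines. For orientation, a minimal sketch of standard IHT for sparse linear regression is below; the step size, iteration count, and function name are illustrative choices, not the paper's tuned values, and this is not the ARHT method itself.

```python
import numpy as np

def iht(X, y, s, step=None, iters=100):
    """Sketch of Iterative Hard Thresholding: a gradient step on the
    least-squares loss, followed by hard thresholding to the s
    largest-magnitude coordinates."""
    n, d = X.shape
    if step is None:
        # Conservative step size: inverse of the gradient's Lipschitz
        # constant ||X||_2^2 for the loss 0.5 * ||Xx - y||^2.
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    x = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ x - y)              # gradient of the loss
        z = x - step * grad                   # gradient step
        support = np.argsort(np.abs(z))[-s:]  # keep s largest entries
        x = np.zeros(d)
        x[support] = z[support]
    return x
```

On well-conditioned noiseless instances this recovers the true support; ARHT's contribution (per the paper's title) is an adaptive regularization scheme layered on top of this hard-thresholding template.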
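The Experiment Setup row names two off-the-shelf solvers: `Lasso` from `sklearn.linear_model` with 1000 iterations, and an LBFGS solver for Logistic Regression with 1000 iterations. A hedged sketch of that baseline configuration on synthetic data (the paper used public datasets; the data, `alpha`, and random seed here are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

# Synthetic stand-in data; the paper's experiments used public
# regression and binary-classification datasets instead.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w = np.zeros(20)
w[:3] = [2.0, -1.5, 1.0]                      # sparse ground truth
y_reg = X @ w + 0.01 * rng.standard_normal(200)
y_clf = (X @ w > 0).astype(int)

# LASSO baseline as reported: sklearn's Lasso, 1000 iterations.
# alpha is an illustrative choice; the paper does not state it here.
lasso = Lasso(alpha=0.1, max_iter=1000).fit(X, y_reg)

# Logistic Regression with an L-BFGS solver, 1000 iterations.
logreg = LogisticRegression(solver="lbfgs", max_iter=1000).fit(X, y_clf)
```

Both `max_iter=1000` settings mirror the iteration counts quoted in the table; everything else (dataset, regularization strength) is a placeholder.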