Approximation Hardness for A Class of Sparse Optimization Problems

Authors: Yichen Chen, Yinyu Ye, Mengdi Wang

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we consider three typical optimization problems with a convex loss function and a nonconvex sparse penalty or constraint. For the sparse penalized problem, we prove that finding an O(n^{c1} d^{c2})-optimal solution to an n × d problem is strongly NP-hard for any c1, c2 ∈ [0, 1) such that c1 + c2 < 1. For two constrained versions of the sparse optimization problem, we show that it is intractable to approximately compute a solution path associated with increasing values of some tuning parameter. The hardness results apply to a broad class of loss functions and sparse penalties. They suggest that one cannot even approximately solve these three problems in polynomial time, unless P = NP.
Researcher Affiliation | Academia | Yichen Chen (EMAIL), Department of Computer Science, Princeton University, Princeton, NJ 08544, USA; Yinyu Ye (EMAIL), Department of Management Science and Engineering, Stanford University, Stanford, CA 94304, USA; Mengdi Wang (EMAIL), Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA
Pseudocode | No | The paper includes a 'Technical Proofs' section (Section 6) with mathematical derivations and lemmas, but no explicit pseudocode blocks or algorithms are presented.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the methodology, nor does it provide links to a code repository.
Open Datasets | No | The paper is theoretical, focusing on the computational complexity of optimization problems with abstract input data A and b. It does not use or refer to any specific publicly available datasets for experimental evaluation.
Dataset Splits | No | The paper is theoretical and does not describe experiments using datasets; therefore, there is no mention of training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and focuses on computational complexity proofs. It does not describe any experimental setup or hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and does not describe any experimental implementation details, so no specific software dependencies or versions are mentioned.
Experiment Setup | No | The paper is theoretical and focuses on approximation hardness proofs. It does not detail any experimental setup, hyperparameters, or training configurations.
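For context, the sparse penalized problem referenced in the abstract can be sketched as follows. This is a representative formulation only; the specific loss ℓ and penalty p shown here (least squares, L0) are assumptions standing in for the broad class the paper actually covers:

```latex
% Sparse penalized problem with data A \in \mathbb{R}^{n \times d} (rows a_i), b \in \mathbb{R}^n:
%   minimize a convex loss plus a nonconvex sparse penalty with tuning parameter \lambda > 0.
\min_{x \in \mathbb{R}^{d}} \;
  \sum_{i=1}^{n} \ell\!\left(a_i^{\top} x,\, b_i\right)
  \;+\; \lambda \sum_{j=1}^{d} p\!\left(|x_j|\right),
\qquad
\text{e.g. } \ell(u, v) = (u - v)^2, \quad p(t) = \mathbf{1}\{t \neq 0\}.
```

The hardness result says that for this n × d problem, no polynomial-time algorithm can guarantee an additive optimality gap of O(n^{c1} d^{c2}) with c1 + c2 < 1, unless P = NP; the constrained versions replace the penalty term with a sparsity constraint whose tuning parameter indexes the intractable solution path.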