Lower Bounds for Parallel and Randomized Convex Optimization

Authors: Jelena Diakonikolas, Cristóbal Guzmán

JMLR 2020

Reproducibility assessment (each row gives the variable, the result, and the LLM's justification):
Research Type: Theoretical. We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation and in the high-dimensional regime. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Another conceptual contribution of our work is in providing a general and streamlined framework for proving lower bounds in the setting of parallel convex optimization.
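
To make the query model in the abstract concrete, here is a minimal Python sketch of parallel optimization under a local first-order oracle. It is not from the paper: the toy objective f(x) = max_i <a_i, x>, the batching scheme, and all names are illustrative assumptions. Each round submits K query points and learns only the function value and one subgradient at each of them, which is exactly the round structure that the paper's lower bounds constrain.

    import numpy as np

    # Toy nonsmooth objective: f(x) = max_i <a_i, x>, minimized over the
    # unit l2 ball. A *local* oracle reveals information only at the queried
    # point: here, f(x) and one subgradient (a row of A attaining the max).
    rng = np.random.default_rng(0)
    d, m = 100, 50                                   # dimension, linear pieces
    A = rng.standard_normal((m, d)) / np.sqrt(d)

    def local_oracle(x):
        vals = A @ x
        i = int(np.argmax(vals))
        return vals[i], A[i]                         # (f(x), a subgradient)

    def parallel_subgradient(rounds, K, step=0.1):
        """K queries per round: one follows a subgradient step, the rest
        probe random perturbations; only oracle answers are ever used."""
        x = np.zeros(d)
        best_val = np.inf
        for _ in range(rounds):
            batch = [x] + [x + 0.1 * rng.standard_normal(d)
                           for _ in range(K - 1)]
            answers = [local_oracle(q) for q in batch]   # one parallel round
            best_val = min(best_val, min(v for v, _ in answers))
            _, g = answers[0]
            x = x - step * g                         # subgradient step
            x /= max(1.0, float(np.linalg.norm(x)))  # project onto the ball
        return best_val

    print(parallel_subgradient(rounds=200, K=16))    # best value seen

Under the paper's lower bounds, no batching scheme of this kind escapes the barrier: in high dimension, even K = poly(d) queries per round cannot bring the number of rounds down to polylogarithmic in the sequential oracle complexity.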
Researcher Affiliation: Academia. Jelena Diakonikolas (EMAIL), Department of Computer Sciences, University of Wisconsin-Madison, 1210 W. Dayton Street, Madison, WI 53706, USA; Cristóbal Guzmán (EMAIL), Institute for Mathematical and Computational Engineering, Faculty of Mathematics and School of Engineering, and Millennium Nucleus Center for the Discovery of Structures in Complex Data, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, Santiago, 7820436, Chile.
Pseudocode: No. The paper presents theoretical lower bounds, theorems, lemmas, and their proofs. It contains no sections labeled 'Pseudocode' or 'Algorithm', and no method is laid out as structured, code-like steps.
Open Source Code: No. The paper is theoretical, presenting lower bounds and mathematical proofs for convex optimization. It describes no implementation for which open-source code would be provided or relevant.
Open Datasets: No. The paper is theoretical and focuses on proving lower bounds for parallel and randomized convex optimization. It involves no experimental evaluation on datasets, so no publicly available or open datasets are mentioned.
Dataset Splits: No. The paper conducts no experiments with datasets and therefore provides no information about dataset splits.
Hardware Specification: No. The paper consists of theoretical results and mathematical proofs. It describes no experimental setup and no specific hardware used for computations.
Software Dependencies: No. The paper is a theoretical work of mathematical proofs and lower bounds. It describes no computational experiments or methodology that would require software dependencies with version numbers.
Experiment Setup: No. The paper presents theoretical lower bounds for convex optimization. Since no experiments were conducted, it describes no experimental setup, hyperparameters, training configurations, or system-level settings.