Universal Online Convex Optimization Meets Second-order Bounds

Authors: Lijun Zhang, Yibo Wang, Guanghui Wang, Jinfeng Yi, Tianbao Yang

JMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we propose a simple strategy for universal online convex optimization, which avoids these limitations. The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses to aggregate predictions from experts. Specifically, the meta-algorithm is required to yield a second-order bound with excess losses, so that it can leverage strong convexity and exponential concavity to control the meta-regret. In this way, our strategy inherits the theoretical guarantee of any expert designed for strongly convex functions and exponentially concave functions, up to a double logarithmic factor. As a result, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds. For general convex functions, it maintains the minimax optimality and also achieves a small-loss bound. Furthermore, we extend our universal strategy to online composite optimization, where the loss function comprises a time-varying function and a fixed regularizer. To deal with the composite loss functions, we employ a meta-algorithm based on the optimistic online learning framework, which not only enjoys a second-order bound, but also can utilize estimations for upcoming loss functions. With suitable configurations, we show that the additional regularizer does not contribute to the meta-regret, thus ensuring universality in the composite setting.
Researcher Affiliation | Collaboration | Lijun Zhang (EMAIL), Yibo Wang (EMAIL): National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; School of Artificial Intelligence, Nanjing University, Nanjing, China. Guanghui Wang (EMAIL): College of Computing, Georgia Institute of Technology, Atlanta, USA. Jinfeng Yi (EMAIL): JD AI Research, Beijing, China. Tianbao Yang (EMAIL): Department of Computer Science and Engineering, Texas A&M University, College Station, USA.
Pseudocode | Yes | Algorithm 1: A Universal Strategy for Online Convex Optimization (USC); Algorithm 2: A Universal Strategy for Online Composite Optimization (USC-Comp)
Open Source Code | No | No explicit statement about code release, or link to a source code repository for the methodology described in this paper, was found.
Open Datasets | No | The paper is theoretical and focuses on algorithm design and regret bounds for online convex optimization. It does not involve experimental evaluation on specific datasets.
Dataset Splits | No | The paper is theoretical and does not present experimental results requiring dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or hardware used.
Software Dependencies | No | The paper is theoretical and does not mention any software implementation details or dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup or specific hyperparameters.
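To make the expert/meta-algorithm structure from the abstract concrete, below is a toy sketch, not the paper's USC. Experts run online gradient descent with different step-size scales on a drifting quadratic loss, and a plain Hedge-style meta-algorithm (standing in for the second-order meta-algorithm the paper requires, e.g. Adapt-ML-Prod) aggregates their predictions using linearized losses. All losses, step sizes, and the horizon here are illustrative assumptions.

```python
import numpy as np

# Toy universal-strategy sketch (NOT the paper's exact USC).
# Round-t loss: f_t(x) = (x - c_t)^2 on the interval [-D, D].
T, D = 200, 1.0
etas = [0.01, 0.1, 1.0]              # per-expert OGD step-size scales (illustrative)
experts = np.zeros(len(etas))        # each expert's iterate x_t^i
w = np.ones(len(etas)) / len(etas)   # meta-algorithm weights over experts
lr_meta = np.sqrt(np.log(len(etas)) / T)

total_loss = 0.0
for t in range(1, T + 1):
    c_t = np.sin(0.1 * t)            # slowly drifting minimizer of f_t
    x_t = float(w @ experts)         # meta prediction: weighted average of experts
    grad = 2.0 * (x_t - c_t)         # gradient of f_t at the meta prediction
    total_loss += (x_t - c_t) ** 2
    # Meta update on LINEARIZED expert losses <grad f_t(x_t), x_t^i>,
    # as in the universal-strategy template (Hedge here for simplicity).
    lin = grad * experts
    w = w * np.exp(-lr_meta * (lin - lin.min()))
    w = w / w.sum()
    # Each expert takes its own projected OGD step on the ORIGINAL loss.
    for i, eta in enumerate(etas):
        g_i = 2.0 * (experts[i] - c_t)
        experts[i] = np.clip(experts[i] - eta / np.sqrt(t) * g_i, -D, D)

print(f"average loss over {T} rounds: {total_loss / T:.4f}")
```

The key design point mirrored from the abstract: experts see the original (possibly strongly convex) losses, while the meta-algorithm only sees their linearized losses, so the meta-regret can be controlled without the meta-algorithm knowing which curvature class each expert targets.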