Decentralized Projection-free Online Upper-Linearizable Optimization with Applications to DR-Submodular Optimization

Authors: Yiyang Lu, Mohammad Pedramfar, Vaneet Aggarwal

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We introduce a novel framework for decentralized projection-free optimization, extending projection-free methods to a broader class of upper-linearizable functions. Our approach combines decentralized optimization techniques with the flexibility of upper-linearizable function frameworks, effectively generalizing the optimization of traditional DR-submodular functions, which capture the diminishing-returns property. We obtain a regret of O(T^{1-θ/2}) with communication complexity O(T^θ) and O(T^{2θ}) linear optimization oracle calls for decentralized upper-linearizable function optimization, for any 0 ≤ θ ≤ 1. This approach yields the first results for monotone up-concave optimization with general convex constraints and for non-monotone up-concave optimization with general convex constraints. Further, the above results for first-order feedback are extended to zeroth-order, semi-bandit, and bandit feedback. All detailed proofs are deferred to the appendices.
Researcher Affiliation | Academia | Yiyang Lu (EMAIL), Purdue University, West Lafayette, IN, USA; Mohammad Pedramfar (EMAIL), Mila - Quebec AI Institute / McGill University, Montreal, QC, Canada; Vaneet Aggarwal (EMAIL), Purdue University, West Lafayette, IN, USA
Pseudocode | Yes | Algorithm 1: Decentralized Online Continuous Upper-Linearizable Optimization (DROCULO); Algorithm 2: Bandit Algorithm for Case B.1; Algorithm 3: Semi-Bandit Algorithm for Cases B.2 and B.3; Algorithm 4: Zeroth-order Full-information Algorithm for Cases B.2 and B.3; Algorithm 5: Bandit Algorithm for Cases B.2 and B.3
Open Source Code | No | The paper does not contain any explicit statement or link indicating the availability of open-source code for the described methodology. The link provided (https://openreview.net/forum?id=bZ5WD2HUQr) is for the OpenReview discussion, not for code.
Open Datasets | No | The paper introduces a theoretical framework for decentralized projection-free online optimization and does not conduct experiments on specific datasets. Therefore, it does not mention the use or availability of open datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical evaluation using datasets, thus no dataset splits are mentioned.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or empirical results; therefore no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical, presenting algorithms and proofs without detailing any software implementations or dependencies for experimental purposes.
Experiment Setup | No | The paper focuses on theoretical contributions and algorithmic design, providing mathematical guarantees for the proposed methods. It does not include an experimental section detailing a specific setup, hyperparameters, or training configurations.
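The regret/communication trade-off stated under Research Type is governed by a single parameter θ ∈ [0, 1]. The sketch below (not from the paper; the function name and return format are illustrative) simply reports the exponents of T in each of the three stated bounds for a given θ:

```python
# Illustrative sketch of the paper's stated trade-off: for any theta in
# [0, 1], regret is O(T^(1 - theta/2)), communication complexity is
# O(T^theta), and linear-optimization-oracle calls number O(T^(2*theta)).

def tradeoff_exponents(theta: float) -> dict:
    """Return the exponent of T in each bound for a given theta."""
    if not 0.0 <= theta <= 1.0:
        raise ValueError("theta must lie in [0, 1]")
    return {
        "regret": 1 - theta / 2,
        "communication": theta,
        "oracle_calls": 2 * theta,
    }

# theta = 1 recovers O(sqrt(T)) regret at the cost of O(T) communication
# and O(T^2) oracle calls; theta = 0 minimizes communication and oracle
# use but gives linear (vacuous) regret.
print(tradeoff_exponents(1.0))  # {'regret': 0.5, 'communication': 1.0, 'oracle_calls': 2.0}
print(tradeoff_exponents(0.0))  # {'regret': 1.0, 'communication': 0.0, 'oracle_calls': 0.0}
```

Intermediate values of θ interpolate between these extremes, which is why the paper states the bounds as a one-parameter family rather than a single point.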