Unconstrained Robust Online Convex Optimization
Authors: Jiujia Zhang, Ashok Cutkosky
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper addresses online learning with corrupted feedback. Our learner is provided with potentially corrupted gradients $\tilde{g}_t$ instead of the true gradients $g_t$. We make no assumptions about how the corruptions arise: they could be the result of outliers, mislabeled data, or even malicious interference. We focus on the difficult unconstrained setting in which our algorithm must maintain low regret with respect to any comparison point $u \in \mathbb{R}^d$. The unconstrained setting is significantly more challenging, as existing algorithms suffer extremely high regret even with very tiny amounts of corruption (which is not true in the case of a bounded domain). Our algorithms guarantee regret $\|u\| G(\sqrt{T} + k)$ when $G \ge \max_t \|g_t\|$ is known, where $k$ is a measure of the total amount of corruption. When $G$ is unknown, we incur an extra additive penalty of $(\|u\|^2 + G^2)k$. ... This paper advances the theoretical understanding of online convex optimization, contributing to the mathematical foundations of machine learning. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Boston University, Boston, USA. Correspondence to: Jiujia Zhang <EMAIL>, Ashok Cutkosky <EMAIL>. |
| Pseudocode | Yes | We summarize the general procedure as Algorithm 1. ... Algorithm 1 General Protocol ... Algorithm 2 Robust Online Learning By Exploiting Linear Offset ... Algorithm 3 FILTER: k-lag Thresholding and Gradient Clipping ... Algorithm 4 TRACKER: Track the Magnitude of wt ... Algorithm 5 Robust Online Learning By Exploiting In-Time Offset |
| Open Source Code | No | Impact Statement: This paper advances the theoretical understanding of online convex optimization, contributing to the mathematical foundations of machine learning. As it does not involve deployable systems or datasets, the broader societal and ethical implications are indirect, and no specific concerns need to be highlighted. |
| Open Datasets | No | The paper is theoretical, developing algorithms and regret bounds for online convex optimization. It does not conduct experiments on specific datasets. The impact statement explicitly mentions 'it does not involve deployable systems or datasets'. |
| Dataset Splits | No | The paper is theoretical and does not use any specific datasets for empirical evaluation, therefore, no dataset splits are discussed. |
| Hardware Specification | No | The paper is theoretical and focuses on algorithm design and regret guarantees. It does not describe any empirical experiments that would require specific hardware, thus no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical, presenting algorithms and mathematical proofs. It does not describe any implemented system or empirical experiments, and therefore no specific software dependencies or versions are mentioned. |
| Experiment Setup | No | The paper is primarily theoretical, focusing on algorithm design and regret bounds for online convex optimization. While Figure 1 illustrates a simulation, it does not provide a detailed experimental setup with specific hyperparameters, training configurations, or system-level settings typically found in empirical studies. |
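The pseudocode row above lists Algorithm 3 (FILTER: k-lag thresholding and gradient clipping). As a rough intuition for why clipping helps against corrupted feedback, here is a minimal, hypothetical sketch of online gradient descent where each incoming (possibly corrupted) gradient is norm-clipped before the update. This is an illustrative toy only, not the paper's algorithm; the function names, learning rate, and threshold are assumptions for the example.

```python
import numpy as np

def clip(g, threshold):
    """Rescale gradient g so its Euclidean norm is at most `threshold`."""
    norm = np.linalg.norm(g)
    if norm > threshold:
        return g * (threshold / norm)
    return g

def ogd_with_clipping(grads, lr=0.1, threshold=1.0):
    """Online gradient descent on potentially corrupted gradients.

    Clipping bounds the damage any single corrupted round can do to the
    iterate: each update moves the point by at most lr * threshold.
    Illustrative sketch only -- not the paper's Algorithm 3.
    """
    w = np.zeros_like(np.asarray(grads[0], dtype=float))
    iterates = []
    for g in grads:
        w = w - lr * clip(np.asarray(g, dtype=float), threshold)
        iterates.append(w.copy())
    return iterates
```

For example, a single adversarially huge gradient moves the iterate by at most `lr * threshold`, whereas unclipped OGD would be thrown arbitrarily far; this mirrors the motivation for the paper's thresholding-plus-clipping filter in the unconstrained setting.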