Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Contaminated Online Convex Optimization

Authors: Tomoya Kamijima, Shinji Ito

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In online convex optimization, some efficient algorithms have been designed for each of the individual classes of objective functions, e.g., convex, strongly convex, and exp-concave. ... This paper introduces a novel approach to address such cases, proposing a new regime we term contaminated online convex optimization. For the contaminated case, we demonstrate that the regret is lower bounded by Ω(log T + √k). ... We also demonstrate that the regret is bounded by O(log T + √(k log T)) when universal algorithms are used. When our proposed algorithms with additional information are employed, the regret is bounded by O(log T + √k), which matches the lower bound. ... In this section, we explain experimental results. We compare the performances of 5 OCO algorithms: OGD with stepsizes η_t = D/(G√t), ONS, MetaGrad, Algorithm 1 with S1 (Con-ONS), and Algorithm 1 with S2 (Con-OGD)."
Researcher Affiliation | Collaboration | Tomoya Kamijima (Department of Mathematical Informatics, The University of Tokyo); Shinji Ito (Department of Mathematical Informatics, The University of Tokyo; NEC Corporation, affiliation upon submission; RIKEN AIP)
Pseudocode | Yes | Algorithm 1: Algorithm using additional information (Input: convex set X ⊆ ℝ^d, x1 ∈ X, T, D, G, α, λ); Algorithm 2: Online Gradient Descent (Zinkevich, 2003); Algorithm 3: Online Newton Step (Hazan et al., 2006); Algorithm 4: MetaGrad Master (Van Erven & Koolen, 2016); Algorithm 5: MetaGrad Slave (Van Erven & Koolen, 2016); Algorithm 6: Maler Meta (Wang et al., 2020); Algorithm 7: Maler Convex Expert (Wang et al., 2020); Algorithm 8: Maler Exp-concave Expert (Wang et al., 2020); Algorithm 9: Maler Strongly Convex Expert (Wang et al., 2020); Algorithm 10: Universal Strategy for Online Convex Optimization (USC) (Zhang et al., 2022)
Open Source Code | No | The paper does not explicitly state that source code for the methodology is openly available, nor does it provide any links to a code repository. It mentions "All the experiments are implemented in Python 3.9.2", but this is not a statement of code release.
Open Datasets | No | "Example 3.7 (Online least mean square regressions). Consider the situation where a batch of input-output data (a_{t,i}, b_{t,i}) ∈ ℝ^d × ℝ (i ∈ {1, 2, . . . , n}) is given in each round t... In this experiment, each component of the vector a_{t,i} is taken from a uniform distribution on [1, 2] independently. We set X = [0, 1]^d and assume there exists an optimal solution x*, which is taken from a uniform distribution on X, i.e., we take b_{t,i} = ⟨a_{t,i}, x*⟩." The paper describes synthetic data generation and does not use or provide access to any publicly available datasets.
Dataset Splits | No | The paper describes experiments on synthetically generated data but does not specify any training/validation/test splits; the experiments are online learning scenarios in which data is revealed sequentially, so no traditional dataset splits apply.
Hardware Specification | Yes | "All the experiments are implemented in Python 3.9.2 on a MacBook Air whose processor is 1.8 GHz dual-core Intel Core i5 and memory is 8 GB."
Software Dependencies | Yes | "All the experiments are implemented in Python 3.9.2"; no other software dependencies are specified.
Experiment Setup | Yes | Table 2 (parameter setting in Experiment 1): x1 = 0, x* = 0.02, T = 1000, k = 250, D = 1, G = 100, α = 1. Table 3 (parameter setting in Experiment 2): x1 = 0, x* = 2/3, T = 1000, k = 250, D = 1, G = 1, λ = 1. Table 4 (parameter setting in Experiment 3): x1 = 0, n = 10, d = 5, T = 1000, k = 250, D = √5. The parameters of ONS are set as γ = 0.005 and ε = 1/(γ²D²) = 40000. The numerical experiment is repeated with 100 different random seeds, and mean and standard deviation are reported.
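The synthetic online least-mean-squares task (Example 3.7, Experiment 3 parameters) and the OGD baseline with stepsize η_t = D/(G√t) quoted above can be sketched as follows. This is a minimal reconstruction, not the authors' code: the gradient-norm bound G is not reported for this experiment, so the value below is a crude assumption, and all function and variable names are our own.

```python
import numpy as np

# Sketch of Example 3.7: batches (a_{t,i}, b_{t,i}) with a_{t,i} ~ U[1, 2]^d,
# b_{t,i} = <a_{t,i}, x*>, feasible set X = [0, 1]^d, run with OGD.
rng = np.random.default_rng(0)
n, d, T = 10, 5, 1000              # Table 4 parameters
D = np.sqrt(d)                     # diameter of X = [0, 1]^d
G = n * d * 4.0                    # crude gradient-norm bound (assumption)
x_star = rng.uniform(0.0, 1.0, d)  # hidden optimum, uniform on X

def loss_grad(x, A, b):
    """Gradient of the batch least-squares loss f_t(x) = (1/2)||A x - b||^2."""
    return A.T @ (A @ x - b)

x = np.zeros(d)                    # x_1 = 0
regret_terms = []
for t in range(1, T + 1):
    A = rng.uniform(1.0, 2.0, (n, d))        # rows are a_{t,i} ~ U[1, 2]^d
    b = A @ x_star                           # b_{t,i} = <a_{t,i}, x*>
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    regret_terms.append(f(x) - f(x_star))    # f(x*) = 0 by construction
    eta = D / (G * np.sqrt(t))               # OGD stepsize from the paper
    x = np.clip(x - eta * loss_grad(x, A, b), 0.0, 1.0)  # project onto [0, 1]^d

print(f"cumulative regret after T={T}: {sum(regret_terms):.3f}")
```

Because b is generated from x*, every per-round regret term is nonnegative, which makes this a convenient sanity check when re-running with different seeds.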
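The ONS parameters quoted above (γ = 0.005, ε = 1/(γ²D²) = 40000 with D = 1) can be illustrated with a minimal Online Newton Step sketch. This is not the paper's implementation: for brevity it maintains A_t⁻¹ via a Sherman–Morrison rank-one update and substitutes a Euclidean projection onto [0, 1]^d for the generalized (A_t-norm) projection of the full algorithm, and the gradient stream below is a placeholder.

```python
import numpy as np

def ons_step(x, g, A_inv, gamma):
    """One ONS update: rank-one update of A^{-1} via Sherman-Morrison for
    A <- A + g g^T, then a Newton-like step x - (1/gamma) A^{-1} g."""
    Ag = A_inv @ g
    A_inv = A_inv - np.outer(Ag, Ag) / (1.0 + g @ Ag)
    x_new = x - (1.0 / gamma) * (A_inv @ g)
    # Simplified Euclidean projection onto [0, 1]^d (the full algorithm
    # projects in the A_t-norm).
    return np.clip(x_new, 0.0, 1.0), A_inv

d, gamma, D = 5, 0.005, 1.0
eps = 1.0 / (gamma ** 2 * D ** 2)  # = 40000, as reported in the paper
A_inv = np.eye(d) / eps            # A_0 = eps * I
x = np.zeros(d)

rng = np.random.default_rng(1)
for t in range(100):
    g = rng.normal(size=d)         # placeholder gradient stream
    x, A_inv = ons_step(x, g, A_inv, gamma)
```

Maintaining A⁻¹ directly keeps each round at O(d²) instead of the O(d³) cost of solving a linear system per step, which is the standard implementation choice for ONS.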