Stochastic Optimization under Distributional Drift

Authors: Joshua Cutler, Dmitriy Drusvyatskiy, Zaid Harchaoui

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments illustrate our results.
Researcher Affiliation | Academia | Joshua Cutler (Department of Mathematics, University of Washington, Seattle, WA 98195-4322, USA); Dmitriy Drusvyatskiy (Department of Mathematics, University of Washington, Seattle, WA 98195-4322, USA); Zaid Harchaoui (Department of Statistics, University of Washington, Seattle, WA 98195-4322, USA)
Pseudocode | Yes | Algorithm 1: Online Proximal Stochastic Gradient, PSG(x_0, {η_t}, T) ... Algorithm 2: Averaged Online Proximal Stochastic Gradient, PSG(x_0, µ, {η_t}, T) ... Algorithm 3: Decision-Dependent PSG, D-PSG(x_0, {η_t}, T) ... Algorithm 4: Averaged Decision-Dependent PSG, D-PSG(x_0, µ, γ, {η_t}, T)
Open Source Code | Yes | Code is available online at https://github.com/joshuacutler/TimeDriftExperiments.
Open Datasets | No | We investigate the empirical behavior of our finite-time bounds on numerical examples with synthetic data.
Dataset Splits | No | To estimate the expected values and confidence intervals of ‖x_t − x̄_t‖² and ϕ_t(x̂_t) − ϕ_t*, we run 100 trials with horizon T = 100.
Hardware Specification | No | The paper describes numerical experiments and provides parameter values, but it does not specify the hardware used to run them.
Software Dependencies | No | Code is available online at https://github.com/joshuacutler/TimeDriftExperiments, but the paper does not specify software dependencies with version numbers.
Experiment Setup | Yes | In our simulations, we set d = 50, n = 100, and Σ_t = (σ²/(nL)) I_n for all t, where I_n denotes the n × n identity matrix. We initialize x_0 and x̄_0 using standard Gaussian entries and generate A via singular value decomposition with Haar-distributed orthogonal matrices. In Figures 1 and 2, we use default parameter values µ = L = 1, σ = 10, ∆ = 1, and the corresponding asymptotically optimal step size η = η∗.
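The PSG scheme named in the pseudocode row above can be sketched with a short proximal stochastic gradient loop. This is a minimal illustration, not the authors' released code: the drifting quadratic loss, the noise level, and the nonnegativity prox are assumptions chosen to make the example self-contained.

```python
import numpy as np

def psg(x0, prox, grad_sample, step_sizes, T, rng):
    """Online proximal stochastic gradient: x_{t+1} = prox_{eta_t}(x_t - eta_t g_t)."""
    x = x0.copy()
    iterates = [x.copy()]
    for t in range(T):
        g = grad_sample(x, t, rng)              # stochastic gradient of the time-t loss
        x = prox(x - step_sizes[t] * g, step_sizes[t])
        iterates.append(x.copy())
    return iterates

# Example (assumed setup): track a slowly drifting minimizer under noisy
# quadratic losses, with the prox of the nonnegativity indicator.
rng = np.random.default_rng(0)
d = 5
target = lambda t: np.full(d, 0.01 * t)                      # drifting minimizer
grad = lambda x, t, rng: (x - target(t)) + 0.1 * rng.standard_normal(d)
prox_nonneg = lambda v, eta: np.maximum(v, 0.0)
xs = psg(np.ones(d), prox_nonneg, grad, [0.5] * 200, 200, rng)
```

With a constant step size the iterates track the drifting minimizer up to a steady-state error driven by the drift and the noise, which is exactly the regime the paper's finite-time bounds describe.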
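The dataset-splits row above describes estimating expected values and confidence intervals from 100 trials with horizon T = 100. A plain Monte Carlo loop suffices; the normal-approximation 95% interval used here is a common convention and an assumption, since the report does not state how the intervals were computed.

```python
import numpy as np

def mean_and_ci(samples, z=1.96):
    """Per-time-step sample mean with a normal-approximation 95% CI."""
    samples = np.asarray(samples, dtype=float)   # shape (trials, T)
    m = samples.mean(axis=0)
    half = z * samples.std(axis=0, ddof=1) / np.sqrt(samples.shape[0])
    return m, m - half, m + half

# Example: 100 trials of a decaying noisy quantity, a synthetic stand-in
# for a tracked error such as ||x_t - xbar_t||^2.
rng = np.random.default_rng(1)
T, trials = 100, 100
runs = np.array([1.0 / np.arange(1, T + 1) + 0.05 * rng.standard_normal(T)
                 for _ in range(trials)])
mean, lo, hi = mean_and_ci(runs)
```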
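The experiment-setup row can be reproduced in outline: Haar-distributed orthogonal matrices via QR of a Gaussian matrix, a matrix A assembled from its SVD factors, Gaussian initialization, and the stated noise covariance. The singular-value profile below is an assumption, since the quoted setup does not specify the spectrum of A.

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Haar-distributed orthogonal matrix via QR of a standard Gaussian matrix."""
    z = rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))  # sign fix makes the distribution exactly Haar

rng = np.random.default_rng(42)
d, n, sigma, L = 50, 100, 10.0, 1.0
U = haar_orthogonal(n, rng)
V = haar_orthogonal(d, rng)
s = np.linspace(L, 0.1, d)                   # assumed singular values, largest = L
A = U[:, :d] @ np.diag(s) @ V.T              # n x d matrix with prescribed spectrum
x0 = rng.standard_normal(d)                  # standard Gaussian initialization
Sigma_t = (sigma**2 / (n * L)) * np.eye(n)   # noise covariance from the setup
```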