Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming

Authors: Sen Na, Michael Mahoney

JMLR 2025

Reproducibility assessment (each entry lists the variable, the result, and the supporting LLM response):
Research Type: Experimental. We illustrate the asymptotic normality result of the method both on benchmark nonlinear problems in the CUTEst test set and on linearly/nonlinearly constrained regression problems. ... We apply AI-StoSQP to both benchmark constrained nonlinear optimization problems in the CUTEst set (Gould et al., 2014) and to linearly/nonlinearly constrained regression problems. ... The results are summarized in Table 1.
Researcher Affiliation: Academia. Sen Na, H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA; Michael W. Mahoney, ICSI and Department of Statistics, University of California, Berkeley, CA 94720, USA.
Pseudocode: Yes. We combine the above three steps and summarize AI-StoSQP in Algorithm 1.
Open Source Code: No. The paper does not contain an explicit statement or link providing access to source code for the methodology described.
Open Datasets: Yes. We illustrate our results on benchmark nonlinear problems in the CUTEst test set and on linearly/nonlinearly constrained regression problems. ... We apply AI-StoSQP to both benchmark constrained nonlinear optimization problems in the CUTEst set (Gould et al., 2014) and to linearly/nonlinearly constrained regression problems.
Dataset Splits: No. For the regression problems, the paper describes how data was *generated* (e.g., 'our method randomly samples a covariate ξ_a ~ N(0, 5I + Σ_a) at each step') rather than using predefined splits of an existing dataset. For the CUTEst problems, these are benchmark problems/test functions, not datasets in the typical train/test split sense. Therefore, specific dataset split information is not applicable or provided.
Hardware Specification: No. The paper discusses computational efficiency in terms of 'flops per iteration' and computational cost, but does not specify any particular hardware components, such as CPU or GPU models, used to run the experiments.
Software Dependencies: No. The paper mentions using the 'IPOPT solver (Wächter and Biegler, 2006)' for solving benchmark problems, but does not provide a version number for it or for any other ancillary software used to implement or analyze the proposed method.
Experiment Setup: Yes. We perform 10^5 iterations and, at each step, we perform τ = 40 randomized Kaczmarz steps to approximately solve QPs. ... We vary σ^2 ∈ {10^-4, 10^-2, 10^-1, 1} and let β_t = 1/t^0.501 (power slightly larger than 0.5) and χ_t = β_t^2. We randomly choose α_t ~ Uniform([β_t, η_t]) with η_t = β_t + χ_t. ... For each case (regression model + constraint type), we vary the parameter dimension d ∈ {5, 20, 40, 60}, and the true solution x^* is linearly spaced between 0 and 1. For each d, our method randomly samples a covariate ξ_a ~ N(0, 5I + Σ_a) at each step, with three different choices of Σ_a... For logistic models, we regularize the loss by a quadratic penalty with unit parameter.
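The quoted experiment setup lends itself to a small numerical sketch. Assuming standard readings of the (extraction-garbled) notation — β_t = 1/t^0.501, χ_t = β_t^2, α_t drawn uniformly from [β_t, β_t + χ_t], covariates ξ_a ~ N(0, 5I + Σ_a), and τ = 40 randomized Kaczmarz row projections per QP — the pieces could look as follows. The equicorrelation Σ_a and the stand-in linear system are illustrative assumptions only: the report does not reproduce the paper's three Σ_a choices or the actual SQP subproblem matrices from Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def stepsize(t):
    """alpha_t ~ Uniform([beta_t, beta_t + chi_t]) with
    beta_t = 1/t^0.501 and chi_t = beta_t^2, as quoted above."""
    beta = 1.0 / t**0.501
    chi = beta**2
    return rng.uniform(beta, beta + chi)

def sample_covariate(d, rho=0.5):
    """One covariate xi_a ~ N(0, 5I + Sigma_a). Sigma_a is a
    hypothetical equicorrelation matrix rho*(J - I); the paper's
    three actual choices are not spelled out in this report."""
    Sigma_a = rho * (np.ones((d, d)) - np.eye(d))
    return rng.multivariate_normal(np.zeros(d), 5.0 * np.eye(d) + Sigma_a)

def kaczmarz(A, b, x0, tau=40):
    """tau randomized Kaczmarz steps on A x = b: at each step,
    project the iterate onto the hyperplane of one row, sampled
    with probability proportional to its squared row norm."""
    x = x0.copy()
    sq = (A**2).sum(axis=1)
    p = sq / sq.sum()
    for _ in range(tau):
        i = rng.choice(len(b), p=p)
        x = x + (b[i] - A[i] @ x) / sq[i] * A[i]
    return x

d = 5
xi = sample_covariate(d)
A = rng.standard_normal((d, d)) + 3.0 * np.eye(d)  # stand-in for a QP's KKT system
x_true = rng.standard_normal(d)
x_hat = kaczmarz(A, A @ x_true, np.zeros(d), tau=40)
print(xi.shape, stepsize(1), np.linalg.norm(x_hat - x_true))
```

Because each Kaczmarz step projects onto a hyperplane that contains the true solution of a consistent system, the error norm is non-increasing per step, which is why a fixed, small τ can still yield a usable inexact QP solve.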