We consider online statistical inference for constrained stochastic nonlinear optimization problems. We apply the Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems, which can be regarded as applying a second-order Newton's method to the Karush-Kuhn-Tucker (KKT) conditions. In each iteration, the StoSQP method computes the Newton direction by solving a quadratic program, and then selects a suitable adaptive stepsize $\bar{\alpha}_t$ to update the primal-dual iterate. To reduce the dominant computational cost of the method, we solve the quadratic program in each iteration inexactly by employing an iterative sketching solver. Notably, the approximation error of the sketching solver need not vanish as iterations proceed, meaning that the per-iteration computational cost does not blow up. For the above StoSQP method, we show that under mild assumptions, the rescaled primal-dual sequence $1/\sqrt{\bar{\alpha}_t}\cdot (x_t - x^\star, \lambda_t - \lambda^\star)$ converges to a mean-zero Gaussian distribution with a nontrivial covariance matrix that depends on the underlying sketching distribution. To perform inference in practice, we also analyze a plug-in covariance matrix estimator. We illustrate the asymptotic normality of the method both on benchmark nonlinear problems in the CUTEst test set and on linearly/nonlinearly constrained regression problems.
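The iteration described above (a Newton step on the KKT conditions, solved inexactly by a randomized solver, then a damped primal-dual update) can be illustrated with a minimal sketch. This is not the paper's exact algorithm: we use a toy equality-constrained quadratic objective, a plain randomized Kaczmarz solver as the sketching solver, and a fixed stepsize `alpha` in place of the adaptive $\bar{\alpha}_t$; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kaczmarz_solve(K, r, n_sweeps=200):
    """Approximately solve K z = r via randomized Kaczmarz row projections.

    The approximation error need not vanish: we run a fixed number of
    sweeps, so the per-iteration cost stays bounded (as in the abstract).
    """
    z = np.zeros_like(r)
    row_norms = np.sum(K * K, axis=1)
    probs = row_norms / row_norms.sum()
    for _ in range(n_sweeps):
        i = rng.choice(K.shape[0], p=probs)
        z += (r[i] - K[i] @ z) / row_norms[i] * K[i]
    return z

# Toy problem: min 0.5 ||x||^2 - g'x  subject to  A x = b,
# so the Hessian of the Lagrangian is the identity.
n, m = 5, 2
g = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, lam = np.zeros(n), np.zeros(m)
alpha = 1.0  # fixed stepsize standing in for the adaptive \bar{alpha}_t

for t in range(50):
    # KKT residual: [grad_x Lagrangian; constraint violation]
    grad_L = x - g + A.T @ lam
    K = np.block([[np.eye(n), A.T], [A, np.zeros((m, m))]])
    r = -np.concatenate([grad_L, A @ x - b])
    step = kaczmarz_solve(K, r)      # inexact Newton direction from sketching
    x += alpha * step[:n]
    lam += alpha * step[n:]

print(np.linalg.norm(A @ x - b))  # constraint violation shrinks across iterations
```

Because each inexact solve contracts the KKT residual by a fixed factor, the primal-dual iterate still converges even though no single QP is solved exactly; this is the mechanism that keeps the per-iteration cost from blowing up.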