Bilevel optimization, with broad applications in machine learning, has an intricate hierarchical structure. Gradient-based methods have emerged as a common approach to large-scale bilevel problems. However, the computation of the hyper-gradient involves a Hessian-inverse-vector product, which limits efficiency and is widely regarded as a bottleneck. To circumvent the explicit inverse, we construct a sequence of low-dimensional approximate Krylov subspaces with the aid of the Lanczos process. The constructed subspaces dynamically and incrementally approximate the Hessian-inverse-vector product at low cost, and thus lead to an accurate estimate of the hyper-gradient. Moreover, we propose a provable subspace-based framework for bilevel problems whose central step is solving a small tridiagonal linear system. To the best of our knowledge, this is the first time that subspace techniques have been incorporated into bilevel optimization. The resulting method not only enjoys an $\mathcal{O}(\epsilon^{-1})$ convergence rate but also demonstrates efficiency on a synthetic problem and two deep learning tasks.
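The central numerical idea, approximating a Hessian-inverse-vector product $H^{-1}v$ through a Lanczos-built Krylov subspace and a small tridiagonal solve, can be sketched as follows. This is a minimal NumPy illustration under the assumption that $H$ is symmetric positive definite and accessible only through a Hessian-vector-product oracle `hvp`; it is not the authors' implementation, and the function name `lanczos_inv_vec` is ours.

```python
import numpy as np

def lanczos_inv_vec(hvp, v, k):
    """Approximate H^{-1} v in a k-dimensional Krylov subspace.

    hvp: callable x -> H @ x (Hessian-vector product); H assumed SPD.
    v:   right-hand-side vector.
    k:   subspace dimension (k << n in large-scale settings).
    """
    n = v.shape[0]
    Q = np.zeros((n, k))          # orthonormal Lanczos basis of the Krylov subspace
    alpha = np.zeros(k)           # diagonal of the tridiagonal matrix T
    beta = np.zeros(k)            # off-diagonal of T
    q = v / np.linalg.norm(v)
    q_prev = np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = hvp(q)                # one Hessian-vector product per iteration
        alpha[j] = q @ w
        # three-term recurrence: orthogonalize against the two previous vectors
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:       # lucky breakdown: subspace is invariant
            k = j + 1
            break
        q_prev, q = q, w / beta[j]
    # Small tridiagonal system T y = ||v|| e_1, then lift back: x ≈ Q y
    T = (np.diag(alpha[:k])
         + np.diag(beta[:k - 1], 1)
         + np.diag(beta[:k - 1], -1))
    e1 = np.zeros(k)
    e1[0] = np.linalg.norm(v)
    y = np.linalg.solve(T, e1)    # k x k solve instead of an n x n inverse
    return Q[:, :k] @ y
```

With $k = n$ (and exact arithmetic) the approximation recovers $H^{-1}v$ exactly; in practice a small $k$ already yields a usable hyper-gradient estimate at the cost of $k$ Hessian-vector products.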