In this work, we investigate the use of data-driven equation discovery for dynamical systems to model and forecast the continuous-time dynamics of unconstrained optimization problems. To avoid expensive evaluations of the objective function and its gradient, we leverage trajectory data on the optimization variables to learn the continuous-time dynamics associated with gradient descent, Newton's method, and ADAM optimization. The discovered gradient flows are then solved as a surrogate for the original optimization problem. To this end, we introduce the Learned Gradient Flow (LGF) optimizer, which builds surrogate models of variable polynomial order, in full- or reduced-dimensional spaces, at user-defined intervals during the optimization process. We demonstrate the efficacy of this approach on several standard problems from engineering mechanics and scientific machine learning, including two inverse problems, structural topology optimization, and two forward solves with different discretizations. Our results suggest that the learned gradient flows can significantly expedite convergence by capturing critical features of the optimization trajectory while avoiding expensive evaluations of the objective and its gradient.
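The workflow described above can be sketched on a toy problem. The following is a minimal illustration, not the authors' LGF implementation: it assumes a SINDy-style least-squares fit of a polynomial library to finite-difference estimates of the trajectory's time derivatives, on a quadratic objective whose gradient flow is linear. All variable names and the degree-1 library are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy quadratic objective f(x) = 0.5 * x^T A x; its gradient flow is dx/dt = -A x.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
grad = lambda x: A @ x

# 1) Collect a short gradient-descent trajectory (the "expensive" phase
#    that actually calls the gradient).
eta, n_steps = 0.05, 40
X = np.empty((n_steps + 1, 2))
X[0] = np.array([2.0, -1.5])
for k in range(n_steps):
    X[k + 1] = X[k] - eta * grad(X[k])

# 2) Estimate time derivatives by finite differences, identifying the
#    step size eta with the flow's time step dt.
dX = (X[1:] - X[:-1]) / eta

# 3) Fit a polynomial model dx/dt = Theta(x) @ Xi by least squares.
#    Library: [1, x1, x2] -- degree-1 terms suffice for a quadratic objective.
Theta = np.column_stack([np.ones(n_steps), X[:-1, 0], X[:-1, 1]])
Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)

# 4) Integrate the learned flow as a surrogate: no further objective or
#    gradient evaluations are needed.
learned_rhs = lambda t, x: np.array([1.0, x[0], x[1]]) @ Xi
sol = solve_ivp(learned_rhs, (0.0, 10.0), X[-1], rtol=1e-8)
x_final = sol.y[:, -1]  # approaches the minimizer x* = 0
```

Here the recovered coefficients `Xi` reproduce the linear flow exactly, so integrating the surrogate drives the iterate much closer to the minimizer than the truncated gradient-descent run, without additional gradient calls; for non-quadratic objectives a higher-order library and periodic refitting would be required.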