Solving massive-scale optimization problems requires scalable first-order methods with low per-iteration cost. This tutorial highlights a shift in optimization: using differentiable programming not only to execute algorithms but to learn how to design them. Modern frameworks such as PyTorch, TensorFlow, and JAX enable this paradigm through efficient automatic differentiation. Embedding first-order methods within these systems allows end-to-end training that improves both convergence speed and solution quality. Guided by Fenchel-Rockafellar duality, the tutorial demonstrates how duality-informed iterative schemes such as ADMM and PDHG can be learned and adapted. Case studies in linear programming (LP), optimal power flow (OPF), Laplacian regularization, and neural network verification illustrate these gains.
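The core idea — embedding a first-order method inside an autodiff framework so its design parameters can be learned end-to-end — can be sketched in a few lines of JAX. The example below is a minimal, illustrative sketch (not taken from the tutorial): it unrolls gradient descent on a toy strongly convex quadratic and differentiates through the unrolled iterations to tune the step size; the matrix, iteration counts, and meta learning rate are all assumptions made for the demo.

```python
# Minimal sketch (illustrative only): differentiate through an unrolled
# first-order method to learn one of its design parameters -- here the
# step size of gradient descent on a toy strongly convex quadratic.
import jax
import jax.numpy as jnp

# Quadratic objective f(x) = 0.5 x^T A x - b^T x with known minimizer.
A = jnp.array([[3.0, 0.5],
               [0.5, 1.0]])
b = jnp.array([1.0, -2.0])
x_star = jnp.linalg.solve(A, b)  # exact minimizer solves A x = b

def unrolled_gd(step, x0, n_iters=20):
    """Run n_iters steps of gradient descent; the Python loop is
    unrolled, so the trajectory is differentiable w.r.t. `step`."""
    x = x0
    for _ in range(n_iters):
        x = x - step * (A @ x - b)  # grad f(x) = A x - b
    return x

def meta_loss(step):
    # End-to-end objective: distance to the optimum after the unrolled run.
    x = unrolled_gd(step, jnp.zeros(2))
    return jnp.sum((x - x_star) ** 2)

# "Learning to optimize": gradient descent on the step size itself.
step = 0.1
meta_grad = jax.grad(meta_loss)
for _ in range(100):
    step = step - 0.005 * meta_grad(step)
```

The same unrolling pattern extends to the duality-informed schemes discussed in the tutorial (e.g. learning ADMM penalty parameters or PDHG step sizes), with the meta-loss measuring primal-dual progress instead of distance to a known optimum.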