We present a data-driven control framework for partial differential equations (PDEs). Our approach integrates Time-Integrated Deep Operator Networks (TI-DeepONets) as differentiable PDE surrogate models within Differentiable Predictive Control (DPC), a self-supervised learning framework for constrained neural control policies. The TI-DeepONet architecture learns temporal derivatives and couples them with numerical integrators, while the DPC algorithm uses automatic differentiation to compute policy gradients by backpropagating the expected optimal-control loss through the learned TI-DeepONet. This approach enables efficient offline optimization of neural policies without online optimization or supervisory controllers. We empirically demonstrate the proposed method on diverse PDE systems, including the heat, nonlinear Burgers', and reaction-diffusion equations. The learned policies achieve target tracking, constraint satisfaction, and curvature minimization objectives while generalizing across distributions of initial conditions and parameters. Moreover, we demonstrate a four-order-of-magnitude speedup at inference time over nonlinear model predictive control benchmarks. These results highlight the promise of operator learning for scalable model-based control of PDEs.
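The core mechanism described above, backpropagating a control loss through a differentiable surrogate rollout to obtain policy gradients, can be illustrated with a minimal sketch. All names and numerical choices here are illustrative assumptions, not the paper's implementation: the trained TI-DeepONet is stood in for by a fixed discrete heat-equation right-hand side, the policy is a linear state-feedback map, and the integrator is explicit Euler.

```python
# Minimal sketch (hypothetical names, not the authors' code) of the DPC idea:
# roll a surrogate time-stepper forward under a parameterized policy, then
# differentiate the accumulated control loss w.r.t. the policy parameters.
import jax
import jax.numpy as jnp

N, DT, STEPS, DIFF = 16, 1e-3, 25, 1.0     # grid size, step, horizon, diffusivity
DX = 1.0 / N                                # periodic domain of unit length
target = jnp.zeros(N)                       # drive the field to zero (toy objective)
u0 = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, N, endpoint=False))

def rhs(u, a):
    # Stand-in for a trained TI-DeepONet: predicts the temporal derivative du/dt.
    # Here: diffusion via a periodic discrete Laplacian, plus an additive control field a.
    lap = jnp.roll(u, 1) - 2.0 * u + jnp.roll(u, -1)
    return DIFF * lap / DX**2 + a

def rollout_loss(W, u):
    # Euler-integrate the surrogate under a linear feedback policy a = W @ (u - target),
    # accumulating a quadratic tracking loss over the horizon.
    loss = 0.0
    for _ in range(STEPS):
        a = W @ (u - target)
        u = u + DT * rhs(u, a)              # explicit Euler step through the surrogate
        loss = loss + jnp.mean((u - target) ** 2)
    return loss / STEPS

W0 = jnp.zeros((N, N))
g = jax.grad(rollout_loss)(W0, u0)          # policy gradient via backprop through the rollout
W1 = W0 - 50.0 * g                          # one offline gradient step on the policy weights
```

Because the entire rollout is differentiable, a single `jax.grad` call yields the policy gradient with no online optimization, which is the property the abstract credits for the offline training of neural policies.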