The recently introduced DeepONet operator-learning framework for PDE control is extended from results for basic hyperbolic and parabolic PDEs to an advanced hyperbolic class involving delays on both the state and the system output or input. The PDE backstepping design produces gain functions that are outputs of a nonlinear operator, mapping functions on a spatial domain into functions on a spatial domain, where this gain-generating operator's inputs are the PDE's coefficients. The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight. Once this approximation-theoretic result is established in infinite dimension, we use it to prove closed-loop stability under feedback that employs the approximate gains. In addition to supplying such results under full-state feedback, we also develop DeepONet-approximated observers and output-feedback laws and prove their stabilizing properties under neural operator approximation. Numerical simulations illustrate the theoretical results and quantify the computational savings, which amount to two orders of magnitude, obtained by replacing numerical PDE solving with the DeepONet.
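As a minimal sketch of the gain-generating operator approximation described above, the following NumPy code evaluates an untrained DeepONet of the standard branch-trunk form, G(u)(y) ≈ Σ_k branch_k(u) · trunk_k(y): the branch net encodes the PDE coefficient function sampled at sensor points, and the trunk net encodes the spatial query point at which the backstepping gain is evaluated. All layer sizes, the tanh activations, and the example coefficient function are illustrative assumptions, not the paper's actual architecture or data.

```python
import numpy as np

# Minimal DeepONet sketch (random, untrained weights). The operator G maps
# an input function u -- e.g. a PDE coefficient sampled at m sensor points --
# to an output function (the backstepping gain), evaluated at a point y.
# Architecture: G(u)(y) ~ sum_k branch_k(u) * trunk_k(y).

rng = np.random.default_rng(0)
m, p, width = 50, 20, 64  # sensor points, basis size, hidden width (assumed)

# Branch net: sampled input function u -> p coefficients.
Wb1 = rng.normal(size=(width, m)) / np.sqrt(m)
Wb2 = rng.normal(size=(p, width)) / np.sqrt(width)
# Trunk net: scalar evaluation point y -> p basis values.
Wt1 = rng.normal(size=(width, 1))
Wt2 = rng.normal(size=(p, width)) / np.sqrt(width)

def deeponet(u_samples, y):
    """Evaluate G(u)(y) for u sampled at m points and a scalar query y."""
    b = Wb2 @ np.tanh(Wb1 @ u_samples)       # branch coefficients, shape (p,)
    t = Wt2 @ np.tanh(Wt1 @ np.array([y]))   # trunk basis values, shape (p,)
    return float(b @ t)                      # inner product = operator output

# Hypothetical example: coefficient beta(x) = 1 + x^2 sampled on [0, 1];
# a single forward pass replaces solving the gain kernel PDE numerically.
x = np.linspace(0.0, 1.0, m)
u = 1.0 + x**2
gain_at_half = deeponet(u, 0.5)  # approximate gain value at y = 0.5
```

The two-orders-of-magnitude speedup reported in the abstract comes from exactly this substitution: once trained, evaluating the network is a handful of matrix products per query, versus a full numerical solve of the gain kernel equations for each new coefficient function.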