The State-Dependent Riccati Equation (SDRE) approach is widely used in nonlinear optimal control as a reliable framework for designing robust feedback control strategies. This work analyzes the SDRE approach, examining its theoretical foundations, error bounds, and numerical approximation techniques. We explore the relationship between the SDRE and the Hamilton-Jacobi-Bellman (HJB) equation, deriving residual-based error estimates that quantify its suboptimality. In addition, we introduce an optimal semilinear decomposition strategy that minimizes this residual. From a computational perspective, we analyze two numerical methods for solving the SDRE: an offline-online approach and the Newton-Kleinman iterative method. Their performance is assessed in a numerical experiment on the control of a nonlinear reaction-diffusion PDE. The results highlight the trade-off between computational efficiency and accuracy, showing that the Newton-Kleinman approach yields stable solutions at lower computational cost.
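To make the Newton-Kleinman method mentioned above concrete, the following is a minimal sketch of the iteration for a fixed algebraic Riccati equation (ARE), using SciPy. In the SDRE setting the matrices would be state-dependent, A(x) and B(x), and a solve of this kind is repeated as the state evolves; the particular system, weights, and tolerances below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative system (assumption): A is Hurwitz, so the zero gain K0 = 0
# is a stabilizing initial guess, as the Newton-Kleinman method requires.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # control weight
Rinv = np.linalg.inv(R)

K = np.zeros((1, 2))   # stabilizing initial gain
P = np.zeros((2, 2))
for _ in range(50):
    Acl = A - B @ K
    # Each Newton step solves the Lyapunov equation
    #   Acl^T P + P Acl = -(Q + K^T R K)
    P_new = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    K = Rinv @ B.T @ P_new   # gain update K = R^{-1} B^T P
    if np.linalg.norm(P_new - P) < 1e-12:
        P = P_new
        break
    P = P_new

# Cross-check against SciPy's direct ARE solver.
P_are = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_are, atol=1e-8))
```

The appeal of the iteration, and one reason for the efficiency observed in the abstract, is that each step requires only a linear Lyapunov solve, and from a stabilizing initial gain the iterates converge quadratically to the stabilizing ARE solution.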