Recent years have witnessed a resurgence in using ReLU neural networks (NNs) to represent model predictive control (MPC) policies. However, determining the network complexity required to ensure closed-loop performance remains a fundamental open problem. This involves a critical precision-complexity trade-off: undersized networks may fail to capture the MPC policy, while the computational cost of oversized ones may outweigh the benefits of ReLU network approximation. In this work, we propose a projection-based method to enforce hard constraints and establish a state-dependent Lipschitz continuity property for the optimal MPC cost function, which enables a sharp convergence analysis of the closed-loop system. For the first time, we derive explicit bounds on ReLU network width and depth for approximating MPC policies with guaranteed closed-loop performance. To further reduce network complexity and enhance closed-loop performance, we propose a non-uniform error framework with a state-aware scaling function that adaptively adjusts both the input and output of the ReLU network. Our contributions provide a foundational step toward certifiable ReLU NN-based MPC.
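To make the projection-based idea concrete, the following is a minimal sketch (not the paper's implementation): a fully connected ReLU network stands in for the learned MPC policy, and its output is projected onto box input constraints via Euclidean projection (clipping), which is the simplest way to enforce hard constraints on the network output. All weights, dimensions, and bounds below are illustrative assumptions.

```python
import numpy as np

def relu_network(x, weights, biases):
    """Forward pass of a fully connected ReLU network with a linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, W @ h + b)  # ReLU activation on hidden layers
    W, b = weights[-1], biases[-1]
    return W @ h + b                     # linear (unconstrained) output

def project_box(u, u_min, u_max):
    """Euclidean projection onto a box {u : u_min <= u <= u_max}."""
    return np.clip(u, u_min, u_max)

# Toy 2-state, 1-input example with random illustrative weights
# (one hidden layer of width 8; not derived from the paper's bounds).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 2)), rng.standard_normal((1, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(1)]

x = np.array([0.5, -1.0])              # current state
u_raw = relu_network(x, weights, biases)
u_safe = project_box(u_raw, u_min=-1.0, u_max=1.0)  # hard input constraint enforced
```

For general polytopic constraints the projection would itself be a small quadratic program rather than a clip, but the box case already illustrates why the projected policy is feasible by construction regardless of the network's approximation error.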