Recurrent Neural Networks (RNNs) have shown remarkable performance in system identification, particularly for nonlinear dynamical systems such as thermal processes. However, stability remains a critical challenge in practical applications: even when the underlying process is intrinsically stable, there is no guarantee that the trained RNN model captures this behavior. This paper addresses the stability issue by deriving a sufficient condition for Input-to-State Stability in the infinity-norm sense (ISS$_{\infty}$) for Long Short-Term Memory (LSTM) networks. The obtained condition depends on fewer network parameters than those in prior works. An ISS$_{\infty}$-promoted training strategy is developed, incorporating a penalty term in the loss function that encourages stability, together with an ad hoc early-stopping approach. The quality of LSTM models trained with the proposed approach is validated on a thermal-system case study, where the ISS$_{\infty}$-promoted LSTM outperforms both a physics-based model and an ISS$_{\infty}$-promoted Gated Recurrent Unit (GRU) network, while also surpassing non-ISS$_{\infty}$-promoted LSTM and GRU networks.
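The penalized training idea described above can be sketched as follows. This is a minimal illustration only: the abstract does not give the exact form of the ISS$_{\infty}$ condition, so `stability_residual` below is a hypothetical placeholder surrogate (an infinity-norm bound on recurrent weight matrices), and `rho` is an assumed penalty weight.

```python
import numpy as np

def stability_residual(W_f, W_i, W_o, W_c):
    # Hypothetical placeholder: a scalar that is <= 0 when a
    # norm-based sufficient stability condition holds. The actual
    # ISS_inf condition for LSTMs is derived in the paper; here a
    # generic infinity-norm surrogate stands in for it.
    return max(np.linalg.norm(W, np.inf) for W in (W_f, W_i, W_o, W_c)) - 1.0

def penalized_loss(y_true, y_pred, weights, rho=10.0):
    # Fitting loss (MSE) plus a hinge penalty that vanishes when the
    # stability residual is non-positive, steering training toward
    # parameter regions that satisfy the sufficient condition.
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = max(0.0, stability_residual(*weights))
    return mse + rho * penalty
```

An ISS$_{\infty}$-aware early-stopping rule (as the abstract suggests) could then monitor both the validation loss and the residual, preferring checkpoints where the residual is non-positive.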