We explore the mathematical foundations of Recurrent Neural Networks (RNNs) and three fundamental procedures: temporal rescaling, discretization, and linearization. These techniques provide essential tools for characterizing RNN behaviour, yielding insights into temporal dynamics, enabling practical computational implementation, and supplying linear approximations for analysis. We discuss the flexibility in the order in which these procedures can be applied, emphasizing their significance in modelling and analyzing RNNs for computational neuroscience and machine learning applications. We explicitly describe the conditions under which these procedures are interchangeable.