Reservoir Computing (RC) has become popular in recent years due to its fast and efficient computational capabilities. Standard RC has been shown to be equivalent in the asymptotic limit to Recurrent Kernels, which helps in analyzing its expressive power. However, many well-established RC paradigms, such as Leaky RC, Sparse RC, and Deep RC, have yet to be analyzed in this way. This study aims to fill this gap by providing an empirical analysis of the equivalence of specific RC architectures with their corresponding Recurrent Kernel formulations. We conduct a convergence study by varying the activation function implemented in each architecture. Our study also sheds light on the role of sparse connections in RC architectures and proposes an optimal sparsity level that depends on the reservoir size. Furthermore, our systematic analysis shows that in Deep RC models, convergence is better achieved with successive reservoirs of decreasing sizes.
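The leaky reservoir update underlying the Leaky RC paradigm discussed above can be sketched as follows. This is a minimal illustration with illustrative (hypothetical) parameter values, not the paper's exact experimental setup: the state is a convex combination of the previous state and a nonlinear map of the driven dynamics, controlled by a leak rate.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, leak = 100, 3, 0.5  # reservoir size, input dimension, leak rate (illustrative)

# Random recurrent and input weight matrices; 1/sqrt(N) scaling keeps
# the recurrent drive roughly O(1) as the reservoir grows.
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W_in = rng.normal(0.0, 1.0, size=(N, d))

def leaky_step(x, u, a=leak):
    """One leaky RC update: x' = (1 - a) * x + a * tanh(W x + W_in u)."""
    return (1.0 - a) * x + a * np.tanh(W @ x + W_in @ u)

# Drive the reservoir with a short random input sequence.
x = np.zeros(N)
for t in range(10):
    x = leaky_step(x, rng.normal(size=d))
```

Because tanh is bounded and the update is a convex combination (for a leak rate in [0, 1]) starting from the zero state, every reservoir coordinate stays in [-1, 1].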