Forecasting high-dimensional spatiotemporal systems remains computationally challenging for recurrent neural networks (RNNs) and long short-term memory (LSTM) models because of the cost of gradient-based training and the associated memory bottlenecks. Reservoir Computing (RC) mitigates these challenges by replacing backpropagation with fixed recurrent layers and a convex readout optimization, yet conventional RC architectures still scale poorly with input dimensionality. We introduce a Sequential Reservoir Computing (Sequential RC) architecture that decomposes a single large reservoir into a series of smaller, interconnected reservoirs. This design reduces memory and computational costs while preserving long-term temporal dependencies. On both a low-dimensional chaotic system (Lorenz63) and high-dimensional physical simulations (2D vorticity and shallow-water equations), Sequential RC achieves 15-25% longer valid forecast horizons, 20-30% improvements in accuracy metrics (RMSE and SSIM), and up to three orders of magnitude lower training cost than LSTM and standard RNN baselines. These results demonstrate that Sequential RC retains the simplicity and efficiency of conventional RC while achieving superior scalability for high-dimensional dynamical systems, offering a practical path toward real-time, energy-efficient forecasting in scientific and engineering applications.
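The abstract does not specify how the sub-reservoirs are coupled or trained, so the following is only a minimal sketch of the chained echo-state idea it describes: each stage is a standard echo state network driven by the previous stage's state, with all recurrent weights fixed at initialization and a single ridge-regression readout (the convex step that replaces backpropagation). All sizes, scalings, and the placeholder data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, input_scale=0.5):
    """Random fixed weights for one echo-state reservoir (never trained)."""
    W = rng.standard_normal((n_res, n_res))
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()  # rescale for the echo-state property
    W_in = input_scale * rng.standard_normal((n_res, n_in))
    return W_in, W

def run_chain(u_seq, reservoirs):
    """Drive a chain of small reservoirs; each stage is fed the previous stage's state."""
    states = [np.zeros(W.shape[0]) for _, W in reservoirs]
    collected = []
    for u in u_seq:
        drive = u
        for k, (W_in, W) in enumerate(reservoirs):
            states[k] = np.tanh(W_in @ drive + W @ states[k])
            drive = states[k]  # the next stage sees this stage's state
        collected.append(np.concatenate(states))  # readout sees all stages
    return np.array(collected)

# Toy setup: 3-dimensional input (e.g. Lorenz63) and two chained 200-unit
# reservoirs instead of one large one; sizes here are arbitrary.
n_in, sizes = 3, [200, 200]
reservoirs, dim = [], n_in
for n_res in sizes:
    reservoirs.append(make_reservoir(dim, n_res))
    dim = n_res

# Training: ridge-regression readout on the concatenated reservoir states.
# U and Y are random placeholders for a trajectory and its one-step-ahead targets.
T = 1000
U = rng.standard_normal((T, n_in))
Y = rng.standard_normal((T, n_in))
X = run_chain(U, reservoirs)
beta = 1e-6  # ridge regularization strength
W_out = np.linalg.solve(X.T @ X + beta * np.eye(X.shape[1]), X.T @ Y).T
pred = X @ W_out.T  # one-step forecasts; closed-loop forecasting would feed these back in
```

Because the recurrent weights stay fixed, the only optimization is the linear solve above, which is where the quoted training-cost advantage over gradient-trained RNN/LSTM baselines would come from; the memory saving comes from storing several small recurrent matrices instead of one large one.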