We introduce a diagonalization-based optimization for Linear Echo State Networks (ESNs) that reduces the per-step computational complexity of reservoir state updates from O(N^2) to O(N). By reformulating the reservoir dynamics in the eigenbasis of the recurrent matrix, the recurrent update becomes a set of independent element-wise operations, eliminating the matrix multiplication. We further propose three methods for applying this optimization, depending on the use case: (i) Eigenbasis Weight Transformation (EWT), which preserves the dynamics of standard and trained Linear ESNs; (ii) End-to-End Eigenbasis Training (EET), which directly optimizes readout weights in the transformed space; and (iii) Direct Parameter Generation (DPG), which bypasses matrix diagonalization by directly sampling eigenvalues and eigenvectors, achieving performance comparable to standard Linear ESNs. Across all experiments, all three methods preserve predictive accuracy while offering significant computational speedups, making them drop-in replacements for standard Linear ESN computation and training, and suggesting a paradigm shift in linear ESNs towards the direct selection of eigenvalues.
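The diagonalization idea described above can be sketched in NumPy. This is a minimal illustration with made-up sizes; the variable names and the toy input are ours, not the paper's. With W = V diag(λ) V⁻¹, the transformed state z = V⁻¹x evolves element-wise, so the N×N matrix-vector product disappears from the recurrence:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 50          # reservoir size and sequence length (illustrative values)

# Standard linear ESN: x_{t+1} = W x_t + W_in u_t  -- O(N^2) per step
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9
W_in = rng.normal(size=(N, 1))
u = rng.normal(size=(T, 1))                       # toy input sequence

x = np.zeros(N)
for t in range(T):
    x = W @ x + W_in @ u[t]                       # dense matrix-vector product

# Eigenbasis reformulation: W = V diag(lam) V^{-1}, so z = V^{-1} x
# evolves as z <- lam * z + b u_t, an element-wise O(N) update
# (eigenvalues lam are complex in general).
lam, V = np.linalg.eig(W)
V_inv = np.linalg.inv(V)
b = V_inv @ W_in                                  # input weights in the eigenbasis

z = np.zeros(N, dtype=complex)
for t in range(T):
    z = lam * z + b @ u[t]                        # element-wise recurrence, no N x N matmul

x_from_z = (V @ z).real                           # map back to the original coordinates
print(np.allclose(x, x_from_z))                   # → True (identical up to round-off)
```

Note that the one-time eigendecomposition costs O(N^3), so the savings accrue over long sequences; this corresponds to the EWT setting, while DPG as described above avoids the decomposition entirely by sampling eigenvalues directly.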