We introduce the concept of multiple temporal perspectives, a novel approach applicable to Recurrent Neural Network (RNN) architectures for enhancing their understanding of sequential data. This method maintains diverse temporal views of previously encountered text, significantly enriching the language model's capacity to interpret context. To demonstrate the efficacy of this approach, we incorporate it into the Receptance Weighted Key Value (RWKV) architecture, addressing its inherent challenge of retaining all historical information within a single hidden state. Notably, this improvement is achieved with a minimal increase in parameters, as little as $0.04\%$ of the original parameter count. Further, the additional parameters required for the multiple temporal perspectives are fine-tuned with minimal computational overhead, avoiding the need for full pre-training. The resulting model maintains linear computational complexity during prompt inference, ensuring consistent efficiency across sequence lengths. The empirical results and ablation studies included in our research validate the effectiveness of our approach, showing improved performance across multiple benchmarks. The code, model weights, and datasets are open-sourced at: https://github.com/RazvanDu/TemporalRNNs.