Second-order Recurrent Neural Networks (2RNNs) extend RNNs by leveraging second-order interactions for sequence modelling. These models are provably more expressive than their first-order counterparts and have connections to well-studied models from formal language theory. However, their large parameter tensor makes computations intractable. To circumvent this issue, one approach, known as MIRNN, restricts the type of interactions used by the model. Another is to leverage tensor decomposition to reduce the parameter count. In this work, we study the model resulting from parameterizing 2RNNs using the CP decomposition, which we call CPRNN. Intuitively, constraining the rank of the decomposition should reduce expressivity. We analyze how rank and hidden size affect model capacity and characterize the relationships between RNNs, 2RNNs, MIRNNs, and CPRNNs in terms of these parameters. We support these results empirically with experiments on the Penn Treebank dataset, which demonstrate that, with a fixed parameter budget, CPRNNs outperform RNNs, 2RNNs, and MIRNNs given an appropriate choice of rank and hidden size.
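To make the construction concrete, the following is a minimal sketch (not taken from the paper) of how a CP-parameterized second-order update can be computed without ever materializing the full transition tensor. It assumes the standard bilinear 2RNN pre-activation, whose i-th entry is the sum over j, k of A[i, j, k] * h[j] * x[k]; the factor names U, V, W for the rank-R CP factors are hypothetical, and bias and first-order terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_x, rank = 8, 5, 4

# Rank-R CP factors of the transition tensor A in R^{d_h x d_h x d_x}:
#   A[i, j, k] = sum_r U[i, r] * V[j, r] * W[k, r]
U = rng.standard_normal((d_h, rank))
V = rng.standard_normal((d_h, rank))
W = rng.standard_normal((d_x, rank))
A = np.einsum("ir,jr,kr->ijk", U, V, W)  # reconstruct the full tensor (for checking only)

h = rng.standard_normal(d_h)  # previous hidden state h_{t-1}
x = rng.standard_normal(d_x)  # current input x_t

# Full 2RNN pre-activation: contract A with h_{t-1} and x_t.
pre_full = np.einsum("ijk,j,k->i", A, h, x)

# Factored CPRNN pre-activation: project h and x onto the factors,
# multiply elementwise, then map back to the hidden dimension.
pre_cp = U @ ((V.T @ h) * (W.T @ x))

assert np.allclose(pre_full, pre_cp)
```

The assertion checks that the factored update matches contracting the reconstructed tensor; its per-step cost scales as O(R(d_h + d_x)) rather than the O(d_h^2 d_x) of the full second-order model, which is how the rank trades parameter count against expressivity.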