To make the foundation model more efficient and effective, our core idea is to combine sequence transformation and state transformation. First, we prove the applicability of rotary position embedding to the state space duality algorithm, which reduces the perplexity of the hybrid of quadratic causal self-attention and state space duality by more than 4%, ensuring that the combined sequence transformations share a unified position encoding. Second, we propose dynamic mask attention, which maintains 100% accuracy on the more challenging multi-query associative recall task, an improvement of more than 150% over quadratic causal self-attention and state space duality, ensuring that the combined sequence transformation selectively filters relevant information. Third, we design a cross-domain mixture of experts, which makes expert retrieval with more than 1024 experts 8 to 10 times faster than a standard mixture of experts, ensuring that the combined state transformation retrieves the expert mixture quickly. Finally, we summarize these matrix algorithms, which together can form a foundation model: Wonderful Matrices, a competitive alternative to popular model architectures.
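For context on the first contribution, the sketch below shows a plain rotary position embedding in PyTorch, using the rotate-half channel pairing. This is background only: it rotates query/key channel pairs by position-dependent angles so that inner products depend on relative offsets; how this rotation is carried into the state space duality algorithm is the paper's contribution and is not attempted here. The function name `apply_rope` and the tensor layout are our assumptions.

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embedding to x of shape (batch, seq_len, dim).

    Channel pairs (i, i + dim/2) are rotated by an angle that grows linearly
    with the position index, so a query-key inner product after rotation
    depends only on the relative distance between the two positions.
    """
    _, seq_len, dim = x.shape
    half = dim // 2
    # Inverse frequencies 1 / base^(2i/dim), one per channel pair.
    inv_freq = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) * 2 / dim))
    pos = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(pos, inv_freq)            # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]          # rotate-half pairing
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Hypothetical usage: rotate queries and keys before the score computation,
# whether that is softmax attention or a state-space-style recurrence.
q = apply_rope(torch.randn(2, 16, 64))
k = apply_rope(torch.randn(2, 16, 64))
scores = torch.einsum("bqd,bkd->bqk", q, k)
```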