To make foundation models more efficient and effective, our idea is to combine sequence transformation and state transformation. First, we prove the applicability of rotary position embedding in the state space duality algorithm, which reduces the perplexity of the hybrid of quadratic causal self-attention and state space duality by more than 4%, ensuring that the combined sequence transformation unifies position encoding. Second, we propose dynamic mask attention, which maintains 100% accuracy on the more challenging multi-query associative recall task, an improvement of more than 150% over quadratic causal self-attention and state space duality, ensuring that the combined sequence transformation selectively filters relevant information. Third, we design cross-domain mixture of experts, which makes expert retrieval with more than 1024 experts 8 to 10 times faster than the standard mixture of experts, ensuring that the combined state transformation retrieves the mixture quickly. Finally, we summarize these matrix algorithms, which can form the foundation model: Wonderful Matrices, a competitor to popular model architectures.
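For reference, rotary position embedding encodes absolute positions as rotations of feature pairs so that inner products depend only on relative position, which is the property that lets it unify position encoding across sequence-transformation modules. The following is a minimal generic sketch of RoPE on a single vector (not the paper's SSD-specific integration; the function name `rope` and the default base 10000 are the conventional choices, assumed here for illustration):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position embedding to a vector x at integer position pos.

    x has even dimension d; each pair (x[2i], x[2i+1]) is rotated by the
    angle pos * base**(-2i/d), i.e. a position-dependent 2-D rotation.
    """
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)  # per-pair frequencies, shape (d/2,)
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

# Key property: <rope(q, m), rope(k, n)> depends only on the offset m - n,
# so shifting both positions by the same amount leaves the score unchanged.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
score_a = rope(q, 5) @ rope(k, 2)
score_b = rope(q, 105) @ rope(k, 102)
assert np.allclose(score_a, score_b)
```

The relative-position invariance checked by the assertion is what makes a single rotary encoding consistent whether the downstream operator is quadratic attention or a state space recurrence.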