Large language models (LLMs), while they have revolutionized many fields, still suffer from the challenging extrapolation problem: their inference ability declines sharply beyond the maximum training length. In this work, we conduct a theoretical analysis to better understand why No Position Encoding (NoPE) fails outside its effective range, and we examine the power of Position Encoding (PE) in this context. Our findings reveal that, with a meticulously woven position arrangement, PE can indeed be extended beyond the effective range. Our theorems establish that LLMs equipped with weave PE can achieve improved extrapolation performance without additional cost. Furthermore, we introduce a novel weave PE method, Mesa-Extrapolation, which uses a chunk-based triangular attention matrix and applies Stair PE to manage the final chunk. This method not only retains competitive performance but also offers substantial benefits, including significantly reduced memory demand and faster inference speed. Extensive experiments validate the effectiveness of Mesa-Extrapolation, demonstrating its potential as a scalable solution for extending the applicable reach of LLMs. Our code is available at \url{https://github.com/soacker/Mesa-Extrapolation}.
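To illustrate the general idea of weaving position indices so that a model never sees relative distances outside its trained window, the following minimal sketch caps relative positions in a causal attention layout. This is an illustrative assumption, not the paper's exact Stair PE formula; the names `max_trained` and `capped_relative_positions` are hypothetical.

```python
# Illustrative sketch only: a capped, stair-like relative position map.
# The exact Stair PE scheme is defined in the paper; this toy example merely
# shows how position weaving can keep every query-key distance within the
# trained range. `max_trained` is a hypothetical parameter here.

def capped_relative_positions(seq_len: int, max_trained: int) -> list[list[int]]:
    """Return a lower-triangular matrix of relative positions, capped at
    max_trained - 1 so no query ever attends at a distance outside the
    training range. Future (masked) keys are marked with -1."""
    rel = []
    for q in range(seq_len):
        row = []
        for k in range(seq_len):
            if k > q:  # causal mask: future keys carry no position
                row.append(-1)
            else:      # cap the relative distance at the trained window
                row.append(min(q - k, max_trained - 1))
        rel.append(row)
    return rel

# With seq_len=6 and max_trained=4, distances beyond 3 are clamped to 3,
# so even the farthest key stays within the positions seen during training.
positions = capped_relative_positions(seq_len=6, max_trained=4)
```

Under this kind of capping, attention scores for distant tokens reuse in-range position encodings instead of extrapolating to unseen ones, which is the intuition behind weave PE methods such as Stair PE.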