A major challenge for transformers is generalizing to sequences longer than those observed during training. While prior work has empirically shown that transformers can succeed or fail at length generalization depending on the task, theoretical understanding of this phenomenon remains limited. In this work, we introduce a rigorous theoretical framework to analyze length generalization in causal transformers with learnable absolute positional encodings. In particular, for transformers with absolute positional encodings, we characterize which functions are identifiable in the limit from sufficiently long inputs under an idealized inference scheme with a norm-based regularizer. This enables us to prove that length generalization is possible for a rich family of problems. We experimentally validate the theory as a predictor of the success and failure of length generalization across a range of algorithmic and formal-language tasks. Our theory not only explains a broad set of empirical observations but also opens the way to provably predicting length generalization capabilities in transformers.
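To make the central notion concrete, the following is a minimal formalization sketch of the idealized inference scheme described above; the notation ($\mathcal{H}$ for the hypothesis class of transformers, $R$ for the norm-based regularizer, $f^*$ for the target function, $|x|$ for input length) is illustrative and not necessarily the paper's own. The idealized learner returns a minimum-regularizer hypothesis consistent with the target on all inputs up to the training length $n$:

\[
\hat{f}_n \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{H}} \; R(f)
\quad \text{s.t.} \quad
f(x) = f^*(x) \;\; \text{for all } x \text{ with } |x| \le n,
\]

and $f^*$ is identifiable in the limit if there exists some $N$ such that every such minimizer $\hat{f}_n$ with $n \ge N$ agrees with $f^*$ on inputs of every length, i.e., the idealized learner length-generalizes once trained on sufficiently long inputs.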