In this study, we provide a constructive proof that Transformers can recognize and generate hierarchical languages efficiently with respect to model size, even without a specific positional encoding. Specifically, we show that causal masking and a starting token enable Transformers to compute positional information and depth within hierarchical structures. We demonstrate that Transformers without positional encoding can generate hierarchical languages. Furthermore, we suggest that explicit positional encoding might have a detrimental effect on generalization with respect to sequence length.
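To give a rough sense of the intuition behind the causal-mask-plus-starting-token claim (this is an illustrative sketch, not the paper's formal construction), the snippet below assumes purely uniform causal attention and a starting (BOS) token whose value channel is set to 1 while all other tokens carry 0. Under these assumptions, the attention output at position i mixes in the BOS value with weight 1/(i+1), so position can be read off from the output even though no positional encoding is supplied.

```python
import numpy as np

def uniform_causal_attention(values: np.ndarray) -> np.ndarray:
    """Uniform causal attention: each position averages its prefix values[0..i]."""
    out = np.zeros_like(values)
    for i in range(len(values)):
        out[i] = values[: i + 1].mean()
    return out

# Hypothetical setup: BOS token carries value 1, all other tokens carry 0,
# and no positional encoding is added anywhere.
values = np.array([1.0] + [0.0] * 7)

mixed = uniform_causal_attention(values)
print(mixed)                 # [1, 1/2, 1/3, ..., 1/8] -- monotone in position
print(1.0 / mixed - 1.0)     # recovers positions 0, 1, 2, ..., 7
```

The point of the toy example is only that a causal mask together with a distinguished start token already injects position-dependent information into the residual stream; the paper's actual construction for computing depth within hierarchical structures is more involved.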