Next-token prediction models have predominantly relied on decoder-only Transformers with causal attention, driven by the common belief that causal attention is essential to prevent "cheating" by masking future tokens. We challenge this widely accepted notion and argue that this design choice is about efficiency rather than necessity. While decoder-only Transformers are still a good choice for practical reasons, they are not the only viable option. In this work, we introduce Encoder-only Next Token Prediction (ENTP). We explore the differences between ENTP and decoder-only Transformers in expressive power and complexity, highlighting potential advantages of ENTP. We introduce the Triplet-Counting task and show, both theoretically and experimentally, that while ENTP can perform this task easily, a decoder-only Transformer cannot. Finally, we empirically demonstrate ENTP's superior performance across various realistic tasks, such as length generalization and in-context learning.
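The core distinction can be made concrete with a minimal sketch. This is not the paper's implementation: it uses a single head, identity Q/K/V projections, and no feed-forward or residual connections, purely to contrast the two ways of preventing "cheating" on future tokens. A decoder-only model runs one pass with a causal mask; ENTP re-encodes each prefix bidirectionally and reads off the last position. With a single attention layer the two coincide in this simplified setting, but once layers are stacked, the decoder's intermediate states at earlier positions stay causally masked, while ENTP recomputes them with full attention inside each prefix.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(h, causal):
    """One self-attention layer with identity Q/K/V projections (illustrative).
    h: list of d-dimensional vectors. If causal, position i sees only positions <= i."""
    d = len(h[0])
    out = []
    for i, q in enumerate(h):
        keys = h[: i + 1] if causal else h
        scores = [sum(q[k] * v[k] for k in range(d)) / math.sqrt(d) for v in keys]
        w = softmax(scores)
        out.append([sum(w[j] * keys[j][k] for j in range(len(keys))) for k in range(d)])
    return out

def decoder_states(x, layers):
    # Decoder-only: one pass over the full sequence, future tokens hidden by the mask.
    h = x
    for _ in range(layers):
        h = attend(h, causal=True)
    return h  # state i is used to predict token i+1

def entp_states(x, layers):
    # ENTP: re-encode each prefix x[:i+1] bidirectionally; the last position
    # predicts the next token. No mask is needed because future tokens are
    # simply absent from the input.
    out = []
    for i in range(len(x)):
        h = x[: i + 1]
        for _ in range(layers):
            h = attend(h, causal=False)
        out.append(h[-1])
    return out
```

ENTP pays for this expressiveness in compute: predicting all n next tokens costs n encoder passes instead of one masked decoder pass, which is the efficiency-versus-necessity trade-off the abstract refers to.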