Transformers require positional encodings to represent sequence order, yet most prior work focuses on designing new positional encodings rather than on examining how positional information is fused with token embeddings. In this paper, we study whether the fusion mechanism itself affects performance, particularly in long-sequence settings. We conduct a controlled empirical study comparing three canonical fusion strategies (element-wise addition, concatenation with projection, and scalar-gated fusion) under identical Transformer architectures, data splits, and random seeds. Experiments on three text classification datasets spanning short (AG News), medium (IMDB), and long (ArXiv) sequences show that the fusion choice has a negligible impact on short texts but produces consistent gains on long documents. To verify that these gains are structural rather than stochastic, we perform a paired-seed analysis and a cross-dataset comparison across sequence-length regimes. Additional experiments on the ArXiv dataset indicate that the benefit of learnable fusion generalizes across multiple positional-encoding families. Finally, we explore a lightweight convolutional gating mechanism that introduces local inductive bias at the fusion level, which we evaluate on long documents only. Our results indicate that positional-encoding fusion is a non-trivial design choice for long-sequence Transformers and should be treated as an explicit modeling decision rather than a fixed default.
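To make the compared strategies concrete, the following is a minimal sketch, assuming a PyTorch implementation, of the four fusion mechanisms named in the abstract: element-wise addition, concatenation with projection, scalar-gated fusion, and the convolutional gating variant. The module names, gating formulations, and hyperparameters (e.g., the kernel size) are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch of positional-encoding fusion strategies (hypothetical
# module names; not the paper's reference implementation).
import torch
import torch.nn as nn


class AdditiveFusion(nn.Module):
    """Element-wise addition: the standard Transformer default."""
    def forward(self, tok, pos):
        return tok + pos


class ConcatProjectionFusion(nn.Module):
    """Concatenate token and positional embeddings, then project back to d_model."""
    def __init__(self, d_model):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, tok, pos):
        return self.proj(torch.cat([tok, pos], dim=-1))


class ScalarGatedFusion(nn.Module):
    """Learn a single scalar gate that weights token versus positional signal."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5 at initialization

    def forward(self, tok, pos):
        g = torch.sigmoid(self.gate)
        return g * tok + (1.0 - g) * pos


class ConvGatedFusion(nn.Module):
    """Lightweight convolutional gate: a 1D conv over the sequence produces
    position-wise gates, injecting local inductive bias at the fusion step."""
    def __init__(self, d_model, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2)

    def forward(self, tok, pos):
        # Conv1d expects (batch, channels, seq_len), hence the transposes.
        g = torch.sigmoid(self.conv(tok.transpose(1, 2)).transpose(1, 2))
        return g * tok + (1.0 - g) * pos


# Usage: tok and pos are (batch, seq_len, d_model) tensors fed to the encoder.
if __name__ == "__main__":
    batch, seq_len, d_model = 2, 128, 64
    tok = torch.randn(batch, seq_len, d_model)
    pos = torch.randn(1, seq_len, d_model).expand(batch, -1, -1)
    fusions = (AdditiveFusion(), ConcatProjectionFusion(d_model),
               ScalarGatedFusion(), ConvGatedFusion(d_model))
    for fusion in fusions:
        out = fusion(tok, pos)
        print(type(fusion).__name__, tuple(out.shape))  # each yields (2, 128, 64)
```

Each module is a drop-in replacement at the embedding stage, so the downstream Transformer layers, data splits, and seeds can be held fixed while only the fusion mechanism varies, which mirrors the controlled comparison described above.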