The self-attention mechanism in Transformer-based Large Language Models (LLMs) scales quadratically with input length, making long-context inference expensive. Sliding window attention (SWA) reduces this cost to linear complexity, but naively enabling complete SWA at inference time for models pretrained with full attention (FA) causes severe long-context performance degradation due to the training-inference mismatch. This raises a natural question: Can FA-pretrained LLMs be well adapted to SWA without pretraining? We investigate this by proposing Sliding Window Attention Adaptation (SWAA), a set of practical recipes that combine five methods for better adaptation: (1) applying SWA only during prefilling; (2) preserving "sink" tokens; (3) interleaving FA/SWA layers; (4) chain-of-thought (CoT); and (5) fine-tuning. Our experiments show that SWA adaptation is feasible but non-trivial: no single method suffices, yet specific synergistic combinations effectively recover the original long-context performance. We further analyze the performance-efficiency trade-offs of different SWAA configurations and provide recommended recipes for diverse scenarios. Our code is available at https://github.com/yuyijiong/sliding-window-attention-adaptation
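To make the masking patterns concrete, here is a minimal NumPy sketch (not the paper's implementation; the function name and parameters are illustrative) of a causal sliding-window attention mask extended with "sink" tokens, corresponding to methods (1) and (2) above:

```python
import numpy as np

def swa_mask(seq_len, window, num_sinks=0):
    """Boolean attention mask for causal sliding-window attention.

    Query position i may attend to key positions in [i - window + 1, i],
    plus the first `num_sinks` "sink" tokens, which stay visible to every
    query regardless of distance. True means "may attend".
    """
    i = np.arange(seq_len)[:, None]  # query positions (column vector)
    j = np.arange(seq_len)[None, :]  # key positions (row vector)
    causal = j <= i                  # no attending to future tokens
    in_window = j > i - window       # within the sliding window
    sinks = j < num_sinks            # sink tokens are always visible
    return causal & (in_window | sinks)

mask = swa_mask(seq_len=6, window=3, num_sinks=1)
# The last query attends to sink token 0 plus the window {3, 4, 5}:
print(mask.astype(int)[5])  # → [1 0 0 1 1 1]
```

With `num_sinks=0` this is plain SWA; keeping a few sink positions globally visible is what method (2) adds on top. A full-attention layer, as interleaved in method (3), simply uses the causal mask alone.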