The dominance of large decoder-only language models has overshadowed encoder-decoder architectures, despite their fundamental efficiency advantages in sequence processing. For small language models (SLMs), those with 1 billion parameters or fewer, our systematic analysis across GPU, CPU, and NPU platforms reveals that encoder-decoder architectures achieve 47% lower first-token latency and 4.7x higher throughput than decoder-only models on edge devices. These gains may be attributed to the encoder-decoder's one-time processing of the input and its clean separation of the understanding and generation phases. We introduce a novel knowledge distillation framework that enables encoder-decoder models to leverage the capabilities of large decoder-only teachers while preserving their architectural advantages, yielding improvements of up to 6 average performance points across diverse tasks, with particularly strong gains on asymmetric sequence tasks where input and output distributions benefit from different processing approaches. Combined with modern advances such as Rotary Positional Embeddings (RoPE) and vision encoders, our systematic investigation demonstrates that encoder-decoder architectures offer a more practical path toward deploying capable language models in resource-constrained environments. Our findings challenge the prevailing trend toward decoder-only scaling, showing that architectural choices become increasingly crucial as parameter budgets shrink, particularly for on-device and edge deployments where computational efficiency is paramount.