The dominance of large decoder-only language models has overshadowed encoder-decoder architectures, despite the latter's fundamental efficiency advantages in sequence processing. For small language models (SLMs), those with 1 billion parameters or fewer, our systematic analysis across GPU, CPU, and NPU platforms reveals that encoder-decoder architectures achieve 47% lower first-token latency and 4.7x higher throughput than decoder-only models on edge devices. These gains may be attributed to the encoder-decoder's one-time processing of the input and its efficient separation of the understanding and generation phases. We introduce a novel knowledge distillation framework that enables encoder-decoder models to leverage the capabilities of large, scalable decoder-only teachers while preserving their architectural advantages, yielding improvements of up to 6 average performance points across diverse tasks, with particularly significant gains on asymmetric sequence tasks, where the input and output distributions benefit from different processing approaches. Combined with modern advances such as Rotary Positional Embeddings (RoPE) and vision encoders, our systematic investigation demonstrates that encoder-decoder architectures offer a more practical path toward deploying capable language models in resource-constrained environments. Our findings challenge the prevailing trend toward decoder-only scaling, showing that architectural choices become increasingly consequential as parameter budgets shrink, particularly for on-device and edge deployments where computational efficiency is paramount.