Latency-critical speech applications (e.g., live transcription, voice commands, and real-time translation) demand low time-to-first-token (TTFT) and high transcription accuracy, particularly on resource-constrained edge devices. Full-attention Transformer encoders remain a strong accuracy baseline for automatic speech recognition (ASR) because every frame can directly attend to every other frame, which resolves otherwise locally ambiguous acoustics using distant lexical context. However, this global dependency incurs quadratic complexity in sequence length, inducing an inherent "encode-the-whole-utterance" latency profile. For streaming use cases, this causes TTFT to grow linearly with utterance length, since the encoder must process the entire prefix before any decoder token can be emitted. To better meet the needs of on-device, streaming ASR use cases, we introduce Moonshine v2, an ergodic streaming-encoder ASR model that employs sliding-window self-attention to achieve bounded, low-latency inference while preserving strong local context. Our models achieve state-of-the-art word error rates across standard benchmarks, attaining accuracy on par with models 6x their size while running significantly faster. These results demonstrate that carefully designed local attention is competitive with the accuracy of full attention at a fraction of the size and latency cost, opening new possibilities for interactive speech interfaces on edge devices.
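To make the complexity claim concrete, the following is a minimal sketch of sliding-window self-attention (not Moonshine's actual implementation; the symmetric window and its size are illustrative assumptions). Each frame attends only to a fixed-width neighborhood, so cost scales as O(T·w·d) rather than the O(T²·d) of full attention; a streaming encoder would additionally bound or remove the right-hand (lookahead) half of the window to keep TTFT constant.

```python
import numpy as np

def sliding_window_attention(q, k, v, window=8):
    """Single-head attention where frame t attends only to frames
    in [t - window, t + window]. q, k, v have shape (T, d)."""
    T, d = q.shape
    out = np.empty_like(v)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        # Scaled dot-product scores over the local neighborhood only.
        scores = q[t] @ k[lo:hi].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[t] = weights @ v[lo:hi]
    return out
```

Because frame t never reads keys or values outside its window, frames beyond the window have no influence on its output, which is what bounds per-frame compute and memory independent of utterance length.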