In this report, we introduce the Qwen3-ASR family, which comprises two powerful all-in-one speech recognition models and a novel non-autoregressive (NAR) speech forced-alignment model. Qwen3-ASR-1.7B and Qwen3-ASR-0.6B are ASR models that support language identification and transcription for 52 languages and dialects. Both leverage large-scale speech training data and the strong audio-understanding ability of their foundation model, Qwen3-Omni. Because ASR models may differ little on open-source benchmark scores yet exhibit significant quality differences in real-world scenarios, we conduct comprehensive internal evaluation in addition to the open-source benchmarks. The experiments reveal that the 1.7B version achieves state-of-the-art performance among open-source ASR models and is competitive with the strongest proprietary APIs, while the 0.6B version offers the best accuracy-efficiency trade-off. Qwen3-ASR-0.6B achieves an average time to first token (TTFT) as low as 92 ms and transcribes 2000 seconds of speech in 1 second at a concurrency of 128. Qwen3-ForcedAligner-0.6B is an LLM-based NAR timestamp predictor that aligns text-speech pairs in 11 languages. Timestamp-accuracy experiments show that the proposed model outperforms the three strongest forced-alignment models while offering greater efficiency and versatility. To further accelerate community research on ASR and audio understanding, we release these models under the Apache 2.0 license.
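The throughput figures above can be restated in real-time-factor (RTF) terms. The sketch below is a minimal calculation using only the numbers reported in the abstract; the even split of work across concurrent streams is an assumption for illustration.

```python
# Hedged sketch: expressing the reported throughput as real-time factors.
# Inputs are the abstract's figures; per-stream division is an assumption.
total_audio_s = 2000.0   # seconds of speech transcribed
wall_clock_s = 1.0       # wall-clock time taken
concurrency = 128        # number of parallel streams

aggregate_rtfx = total_audio_s / wall_clock_s     # 2000x faster than real time overall
per_stream_rtfx = aggregate_rtfx / concurrency    # ~15.6x per stream, if evenly divided

print(aggregate_rtfx, round(per_stream_rtfx, 1))  # → 2000.0 15.6
```

This shows that even a single stream would run well above real time under the stated batch throughput, which is why the 0.6B model is positioned as the accuracy-efficiency trade-off point.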