Automatic speech recognition (ASR) models have gained prominence for applications such as captioning, speech translation, and live transcription. This paper studies Whisper and two model variants: one optimized for live speech streaming and another for offline transcription. Notably, these models have been found to generate hallucinated content, reducing transcription reliability. Furthermore, larger model variants exhibit increased latency and pose challenges for deployment on resource-constrained devices. This study analyzes the similarities and differences between the three Whisper models, qualitatively examining their distinct capabilities. It then quantifies the impact of model quantization on latency and evaluates its viability for edge deployment. Using the open-source LibriSpeech dataset, this paper evaluates word error rate (WER) and latency for whisper.cpp under three quantization methods (INT4, INT5, INT8). Results show that quantization reduces latency by 19\% and model size by 45\%, while preserving transcription accuracy. These findings provide insights into the optimal use cases of different Whisper models and the feasibility of edge-device deployment. All code, datasets, and implementation details are available in a public GitHub repository: https://github.com/allisonandreyev/WhisperQuantization.git
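For reference, the WER metric used above is the word-level edit distance between a reference transcript and a hypothesis, normalized by the reference length. A minimal self-contained sketch (not the paper's evaluation harness, which may use a library implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution (or match)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, `wer("the cat sat", "the bat sat")` yields 1/3, since one of three reference words is substituted.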