This paper presents a new approach to fine-tuning OpenAI's Whisper model for low-resource languages by introducing a novel data generation method that converts sentence-level data into a long-form corpus, using Swiss German as a case study. Non-sentence-level data, which could improve performance on long-form audio, is difficult to obtain and often restricted by copyright laws. Our method bridges this gap by transforming more accessible sentence-level data into a format that preserves the model's ability to handle long-form audio and perform segmentation, without requiring any non-sentence-level data. Our data generation process improves performance in several real-world applications and leads to a new state-of-the-art speech-to-text (STT) model for Swiss German. We compare our model with a non-fine-tuned Whisper and with our previous state-of-the-art Swiss German STT models, and our new model achieves higher BLEU scores. Our results also indicate that the proposed method is adaptable to other low-resource languages: we provide written guidance and code that enable the creation of fine-tuned Whisper models which retain segmentation capabilities and can transcribe longer audio files with high quality using only sentence-level data.
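The core idea of the data generation method can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes sentence-level samples are available as (duration, transcript) pairs, packs them into pseudo long-form examples no longer than Whisper's 30-second processing window, and renders each example's transcript with Whisper-style timestamp tokens so that the fine-tuned model retains its segmentation ability.

```python
# Sketch of converting sentence-level data into long-form training examples.
# Assumption: each sentence is a (duration_seconds, transcript) pair; in
# practice the corresponding audio clips would be concatenated the same way.
import random

MAX_WINDOW = 30.0  # Whisper processes audio in 30-second windows


def build_longform_examples(sentences, seed=0):
    """Pack shuffled sentence-level samples into <= 30 s pseudo long-form examples."""
    rng = random.Random(seed)
    pool = sentences[:]
    rng.shuffle(pool)
    examples, current, elapsed = [], [], 0.0
    for duration, text in pool:
        if elapsed + duration > MAX_WINDOW and current:
            examples.append(current)
            current, elapsed = [], 0.0
        current.append((duration, text))
        elapsed += duration
    if current:
        examples.append(current)
    return examples


def render_transcript(example):
    """Render one example as a Whisper-style timestamped transcript."""
    parts, t = [], 0.0
    for duration, text in example:
        parts.append(f"<|{t:.2f}|>{text}<|{t + duration:.2f}|>")
        t += duration
    return "".join(parts)
```

Because segment boundaries between sentences are known by construction, the generated transcripts carry accurate timestamp supervision, which is what allows the fine-tuned model to keep segmenting long-form audio.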