This paper presents a new long-form release of the Swiss Parliaments Corpus, converting entire multi-hour Swiss German debate sessions (each aligned with the official session protocols) into high-quality speech-text pairs. Our pipeline first transcribes all session audio into Standard German using Whisper Large-v3 under high-compute settings. We then apply a two-step GPT-4o correction process: first, GPT-4o ingests the raw Whisper output alongside the official protocols to refine misrecognitions, mainly of named entities; second, a separate GPT-4o pass evaluates each refined segment for semantic completeness. We filter out any segment whose predicted BLEU score (derived from Whisper's average token log-probability) and GPT-4o evaluation score fall below set thresholds. The final corpus contains 801 hours of audio, of which 555 hours pass our quality control. Compared to the original sentence-level SPC release, our long-form dataset achieves a 6-point BLEU improvement, demonstrating the value of combining robust ASR, LLM-based correction, and data-driven filtering for low-resource, domain-specific speech corpora.
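The quality-control step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear mapping from average token log-probability to a BLEU estimate, its coefficients, and both thresholds are hypothetical placeholders standing in for the fitted model and tuned cutoffs.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    avg_logprob: float  # Whisper's average token log-probability for the segment
    llm_score: float    # GPT-4o semantic-completeness score (assumed scaled to 0..1)

def predict_bleu(avg_logprob: float, slope: float = 100.0, intercept: float = 90.0) -> float:
    """Estimate BLEU from the average token log-probability via a linear model.

    The slope and intercept here are illustrative placeholders; in practice
    they would be fitted on held-out segments with reference transcripts.
    """
    return max(0.0, min(100.0, slope * avg_logprob + intercept))

def filter_segments(segments: list[Segment],
                    bleu_thresh: float = 40.0,
                    llm_thresh: float = 0.5) -> list[Segment]:
    """Keep only segments that clear both quality thresholds (both are placeholders)."""
    return [
        s for s in segments
        if predict_bleu(s.avg_logprob) >= bleu_thresh and s.llm_score >= llm_thresh
    ]

# Toy example: only the first segment clears both placeholder thresholds.
segments = [
    Segment("a", avg_logprob=-0.1, llm_score=0.9),  # confident ASR, complete -> kept
    Segment("b", avg_logprob=-1.0, llm_score=0.9),  # low ASR confidence -> dropped
    Segment("c", avg_logprob=-0.1, llm_score=0.2),  # semantically incomplete -> dropped
]
kept = filter_segments(segments)
```

A segment passes only if both signals agree, which matches the dual-score filtering described above; the 555 of 801 hours retained in the paper correspond to segments clearing both real thresholds.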