Bengali (Bangla) remains under-resourced in long-form speech technology despite its wide use. We present Bengali-Loop, two community benchmarks to address this gap: (1) a long-form ASR corpus of 191 recordings (158.6 hours, 792k words) from 11 YouTube channels, collected via a reproducible subtitle-extraction pipeline and human-in-the-loop transcript verification; and (2) a speaker diarization corpus of 24 recordings (22 hours, 5,744 annotated segments) with fully manual speaker-turn labels in CSV format. Both benchmarks target realistic multi-speaker, long-duration content (e.g., Bangla drama/natok). We establish baselines (Tugstugi: 34.07% WER; pyannote.audio: 40.08% DER) and provide standardized evaluation protocols (WER/CER, DER), annotation rules, and data formats to support reproducible benchmarking and future model development for Bangla long-form ASR and diarization.
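The word error rate used for the ASR baseline above is the standard edit-distance metric: (substitutions + deletions + insertions) divided by the number of reference words. A minimal self-contained sketch (in practice a toolkit such as jiwer or sclite would be used, and text normalization matters for Bangla script; this illustration skips both):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over word sequences via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Two deleted words out of six reference words -> WER = 2/6 ≈ 0.333
print(round(wer("the cat sat on the mat", "the cat sat mat"), 3))
```

CER is the same computation over characters instead of words; DER additionally accounts for missed speech, false-alarm speech, and speaker confusion over time.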