Automatic speech recognition (ASR) for conversational speech remains challenging due to the limited availability of large-scale, well-annotated multi-speaker dialogue data and the complex temporal dynamics of natural interactions. Speaker-aware simulated conversations (SASC) offer an effective data augmentation strategy by transforming single-speaker recordings into realistic multi-speaker dialogues. However, prior work has primarily focused on English data, leaving open questions about its applicability to lower-resource languages. In this paper, we adapt and implement the SASC framework for Hungarian conversational ASR. We further propose C-SASC, an extended variant that incorporates pause modeling conditioned on utterance duration, enabling a more faithful representation of the local temporal dependencies observed in human conversation while retaining the simplicity and efficiency of the original approach. We generate synthetic Hungarian dialogues from the BEA-Large corpus and combine them with real conversational data for ASR training. Both SASC and C-SASC are evaluated extensively under a wide range of simulation configurations, using conversational statistics derived from the CallHome, BEA-Dialogue, and GRASS corpora. Experimental results show that speaker-aware conversational simulation consistently improves recognition performance over naive concatenation-based augmentation. The additional duration conditioning in C-SASC yields modest but systematic gains, most notably in character-level error rates, although its effectiveness depends on how well the source conversational statistics match the target domain. Overall, our findings confirm the robustness of speaker-aware conversational simulation for Hungarian ASR and highlight both the benefits and the limitations of increasingly detailed temporal modeling in synthetic dialogue generation.