Taiwanese Hakka is a low-resource, endangered language that poses significant challenges for automatic speech recognition (ASR), including high dialectal variability and the presence of two distinct writing systems (Hanzi and Pinyin). Conventional ASR models struggle in this setting because they tend to conflate essential linguistic content with dialect-specific variation along both phonological and lexical dimensions. To address these challenges, we propose a unified framework built on the Recurrent Neural Network Transducer (RNN-T). Central to our approach are dialect-aware modeling strategies designed to disentangle dialectal "style" from linguistic "content", enhancing the model's capacity to learn robust, generalizable representations. In addition, the framework employs parameter-efficient prediction networks to model Hanzi and Pinyin ASR concurrently. We demonstrate that these tasks create a powerful synergy, in which the cross-script objective acts as a mutual regularizer that improves the primary ASR tasks. Experiments on the HAT corpus show that our model achieves relative error rate reductions of 57.00% and 40.41% on Hanzi and Pinyin ASR, respectively. To our knowledge, this is the first systematic investigation of the impact of Hakka dialectal variation on ASR, and the first single model capable of jointly addressing these tasks.
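The parameter-efficiency claim can be illustrated with back-of-the-envelope arithmetic: a unified model shares one acoustic encoder across both scripts and pays only for the lightweight, script-specific prediction and joint networks. All component sizes below are hypothetical placeholders for illustration, not figures reported in the paper.

```python
# Hypothetical parameter-count sketch for a shared-encoder dual-script RNN-T
# versus two independent single-script RNN-T models. Every size here is an
# assumed, illustrative value.

ENCODER_PARAMS = 80_000_000    # shared acoustic encoder (assumed size)
PRED_NET_PARAMS = 5_000_000    # one lightweight prediction network (assumed)
JOINT_NET_PARAMS = 2_000_000   # one joint network (assumed)

# Baseline: two independent RNN-T models, one per writing system.
separate = 2 * (ENCODER_PARAMS + PRED_NET_PARAMS + JOINT_NET_PARAMS)

# Unified model: one shared encoder, plus script-specific
# prediction and joint networks for Hanzi and Pinyin.
unified = ENCODER_PARAMS + 2 * (PRED_NET_PARAMS + JOINT_NET_PARAMS)

saving = 1 - unified / separate
print(f"separate: {separate:,}  unified: {unified:,}  saving: {saving:.1%}")
```

Under these assumed sizes, sharing the encoder roughly halves the total parameter count, which is the sense in which the dual prediction networks are "parameter-efficient": the expensive acoustic component is amortized across both script-level objectives.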