BERT-based models have recently shown remarkable ability on the Chinese Spelling Check (CSC) task. However, traditional BERT-based methods still suffer from two limitations. First, although previous works have identified that explicit prior knowledge such as Part-Of-Speech (POS) tags can benefit the CSC task, they neglected the fact that spelling errors inherent in CSC data can lead to incorrect tags and therefore mislead models. Second, they ignored the correlation between the implicit hierarchical information encoded in BERT's intermediate layers and different linguistic phenomena, which results in sub-optimal accuracy. To alleviate these two issues, we design a heterogeneous knowledge-infused framework to strengthen BERT-based CSC models. To incorporate explicit POS knowledge, we employ an auxiliary-task strategy driven by a Gaussian mixture model. Meanwhile, to incorporate the implicit hierarchical linguistic knowledge within the encoder, we propose a novel form of n-gram-based layerwise self-attention to generate a multilayer representation. Experimental results show that our proposed framework yields a stable performance boost over four strong baseline models and outperforms previous state-of-the-art methods on two datasets.
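To give a concrete picture of the layerwise fusion described above, the following is a minimal PyTorch sketch of self-attention over an encoder's intermediate hidden states to produce a multilayer representation. The class name `LayerwiseSelfAttention`, the choice of querying from the top layer, and the omission of the n-gram windowing are simplifying assumptions for illustration, not the exact formulation proposed here.

```python
# Minimal sketch (assumptions: top-layer query, no n-gram windowing):
# attend over the stack of encoder layers at each token position and
# fuse them into a single multilayer representation.
import torch
import torch.nn as nn


class LayerwiseSelfAttention(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.scale = hidden_size ** 0.5

    def forward(self, hidden_states):
        # hidden_states: list of L tensors, each (batch, seq_len, hidden)
        h = torch.stack(hidden_states, dim=2)       # (batch, seq, L, hidden)
        q = self.query(h[:, :, -1:, :])             # query from the top layer
        k = self.key(h)                             # keys from every layer
        attn = torch.softmax(q @ k.transpose(-1, -2) / self.scale, dim=-1)
        return (attn @ h).squeeze(2)                # (batch, seq, hidden)


# Hypothetical usage with HuggingFace transformers:
#   outputs = bert(input_ids, output_hidden_states=True)
#   fused = LayerwiseSelfAttention(768)(list(outputs.hidden_states))
```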