Writing, as an omnipresent form of human communication, permeates nearly every aspect of contemporary life. Consequently, errors in written communication can have serious consequences, ranging from financial losses to potentially life-threatening situations. Spelling mistakes are among the most prevalent writing errors and arise from a variety of factors. This research aims to detect and correct diverse spelling errors in Persian text using neural networks, specifically the Bidirectional Encoder Representations from Transformers (BERT) masked language model. To this end, we categorized the different types of spelling mistakes and compiled a comprehensive dataset covering both non-real-word and real-word errors. We then employed several pre-trained BERT models. To improve correction accuracy, we propose a combined approach that couples the BERT masked language model with Levenshtein distance. Results on our evaluation data show that the proposed system detects and corrects spelling mistakes effectively, often outperforming existing systems tailored for the Persian language.
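The combined approach described above, reranking masked-language-model candidates by their edit distance to the observed token, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the candidate list stands in for BERT fill-mask predictions, and the `alpha` weight that trades off edit distance against LM probability is an illustrative assumption.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def rerank(misspelled: str,
           candidates: list[tuple[str, float]],
           alpha: float = 1.0) -> list[str]:
    """Rank masked-LM candidates for a position flagged as misspelled:
    prefer candidates with high LM probability and low Levenshtein
    distance to the token actually observed in the text."""
    def score(item: tuple[str, float]) -> float:
        word, lm_prob = item
        # Lower score is better: distance penalizes candidates far
        # from the observed spelling; lm_prob rewards plausible words.
        return alpha * levenshtein(misspelled, word) - lm_prob
    return [word for word, _ in sorted(candidates, key=score)]

# Hypothetical fill-mask output for "I fixed the speling in the draft."
candidates = [("speaking", 0.40), ("spelling", 0.35), ("writing", 0.10)]
print(rerank("speling", candidates)[0])
```

The distance term is what distinguishes this from using the masked LM alone: "speaking" may be more probable in context, but "spelling" is only one edit away from the observed token, so the combined score selects it.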