We train a bilingual Arabic-Hebrew language model using a version of the Arabic texts transliterated into Hebrew script, ensuring that both languages are represented in the same script. Given the morphological and structural similarities between Arabic and Hebrew, and the extensive number of cognates the two languages share, we assess the performance of a language model that employs a unified script for both languages on machine translation, a task that requires cross-lingual knowledge. The results are promising: our model outperforms a contrasting model that keeps the Arabic texts in Arabic script, demonstrating the efficacy of the transliteration step. Despite being trained on a dataset approximately 60% smaller than those of other existing language models, our model delivers comparable performance in machine translation in both translation directions.