Natural Language Inference (NLI) is a task within Natural Language Processing (NLP) that holds value for various AI applications. However, studies on Vietnamese NLI that explore joint models remain limited. Therefore, we conducted experiments with various combinations of contextualized language models (CLMs) and neural networks: the CLM produces contextualized word representations, and the neural network performs classification. Furthermore, we evaluated the strengths and weaknesses of each joint model and identified its failure points in the Vietnamese context. The highest F1-score in these experiments reaches 82.78% on the benchmark dataset (ViNLI). Among the models tested, the largest CLM is XLM-R (355M); its joint combination consistently outperforms fine-tuning strong pre-trained language models such as PhoBERT (+6.58%), mBERT (+19.08%), and XLM-R (+0.94%) in terms of F1-score. This article aims to introduce a novel approach that attains improved performance for Vietnamese NLI. Overall, we find that the joint approach of CLMs and neural networks is simple yet capable of achieving high-quality performance, which makes it suitable for applications that require efficient resource utilization.