Natural Language Inference (NLI) is a task within Natural Language Processing (NLP) that holds value for various AI applications. However, studies on Vietnamese NLI that explore joint models remain limited. We therefore conducted experiments with various combinations of contextualized language models (CLMs) and neural networks: the CLM produces contextualized word representations, and the neural network performs classification. Furthermore, we evaluated the strengths and weaknesses of each joint model and identified its failure points in the Vietnamese context. The highest F1-score in our experiments reaches 82.78\% on the benchmark dataset (ViNLI). Among the models tested, the largest CLM is XLM-R (355M); this combination consistently outperforms fine-tuning strong pre-trained language models such as PhoBERT (+6.58\%), mBERT (+19.08\%), and XLM-R (+0.94\%) in terms of F1-score. This article introduces a novel approach that attains improved performance for Vietnamese NLI. Overall, we find that the joint approach of CLMs and neural networks is simple yet capable of achieving high-quality performance, which makes it suitable for applications that require efficient resource utilization.
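To make the joint architecture concrete, the sketch below illustrates the general pattern of using an encoder's contextualized token representations as input to a separate classifier head. This is a minimal, self-contained toy (pure Python, random vectors standing in for CLM outputs, a hypothetical 3-way label set), not the paper's actual implementation or hyperparameters.

```python
import math
import random

random.seed(0)

NUM_CLASSES = 3  # hypothetical label set: entailment / contradiction / neutral
HIDDEN = 8       # toy embedding size; a real CLM such as XLM-R uses far more

def classify(token_embeddings, weights, bias):
    """Mean-pool the token vectors, then apply a linear softmax classifier.

    In the joint setup, `token_embeddings` would come from a CLM encoding
    of the premise-hypothesis pair; here they are random stand-ins.
    """
    n = len(token_embeddings)
    pooled = [sum(tok[d] for tok in token_embeddings) / n for d in range(HIDDEN)]
    logits = [sum(pooled[d] * weights[d][c] for d in range(HIDDEN)) + bias[c]
              for c in range(NUM_CLASSES)]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return probs.index(max(probs))        # predicted class index

# Toy "sentence pair" of 5 tokens and randomly initialized head parameters.
tokens = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(5)]
weights = [[random.gauss(0, 1) for _ in range(NUM_CLASSES)] for _ in range(HIDDEN)]
bias = [0.0] * NUM_CLASSES
label = classify(tokens, weights, bias)
```

In practice, the linear head above would be replaced by the neural-network classifiers studied in the paper (e.g. recurrent or convolutional layers over the token representations), and both components would be trained on ViNLI.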