Natural Language Inference (NLI) is a cornerstone of Natural Language Processing (NLP), providing insight into the entailment relationships between text pairs. It is a critical component of Natural Language Understanding (NLU), reflecting a system's ability to extract information from spoken or written interactions. NLI is chiefly concerned with determining the entailment relationship between two statements, known as the premise and the hypothesis. When the premise logically implies the hypothesis, the pair is labeled ``entailment''. If the hypothesis contradicts the premise, the pair receives the ``contradiction'' label. When there is insufficient evidence to establish either relation, the pair is labeled ``neutral''. Despite the success of Large Language Models (LLMs) across a wide range of tasks, their effectiveness in NLI remains constrained by issues such as low accuracy in low-resource domains, model overconfidence, and difficulty capturing disagreements in human judgment. This study addresses the underexplored problem of evaluating LLMs in low-resource languages such as Bengali. Through a comprehensive evaluation, we assess the performance of prominent LLMs and state-of-the-art (SOTA) models on Bengali NLP tasks, focusing on natural language inference. Using the XNLI dataset, we conduct zero-shot and few-shot evaluations, comparing LLMs such as GPT-3.5 Turbo and Gemini 1.5 Pro with models such as BanglaBERT, Bangla BERT Base, DistilBERT, mBERT, and sahajBERT. Our findings reveal that while LLMs can achieve performance comparable or superior to fine-tuned SOTA models in few-shot scenarios, further research is needed to deepen our understanding of LLMs in languages with modest resources such as Bengali. This study underscores the importance of continued efforts to explore LLM capabilities across diverse linguistic contexts.
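The three-way label scheme described above can be illustrated with a minimal sketch; the premise–hypothesis pairs here are hypothetical examples, not drawn from the XNLI dataset:

```python
# Minimal illustration of the three-way NLI label scheme:
# entailment, contradiction, and neutral. Example sentences are
# hypothetical and chosen only to make each relation obvious.

EXAMPLES = [
    # (premise, hypothesis, gold label)
    ("A man is playing a guitar on stage.",
     "A person is performing music.", "entailment"),
    ("A man is playing a guitar on stage.",
     "The stage is empty.", "contradiction"),
    ("A man is playing a guitar on stage.",
     "The concert is sold out.", "neutral"),
]

for premise, hypothesis, label in EXAMPLES:
    print(f"{label:13s} | P: {premise} | H: {hypothesis}")
```

In a zero-shot setting, an LLM would be prompted with only the premise, the hypothesis, and the three label names; in a few-shot setting, a handful of labeled pairs like these would be included in the prompt as demonstrations.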