Verbs form the backbone of language, providing structure and meaning to sentences. Yet their intricate semantic nuances pose a longstanding challenge. Understanding verb relations through the concept of lexical entailment is crucial for comprehending sentence meanings and grasping verb dynamics. This work investigates the capabilities of eight Large Language Models in recognizing lexical entailment relations among verbs, using differently devised prompting strategies in zero- and few-shot settings over verb pairs drawn from two lexical databases, namely WordNet and HyperLex. Our findings reveal that the models can tackle the lexical entailment recognition task with moderately good performance, although at varying degrees of effectiveness and under different conditions. Moreover, few-shot prompting can enhance the models' performance. However, perfectly solving the task remains an unmet challenge for all examined LLMs, which highlights the need for further research on this topic.