Metaphor detection, a critical task in natural language processing, involves identifying whether a given word in a sentence is used metaphorically. Traditional approaches often rely on supervised learning models that implicitly encode semantic relationships based on metaphor theories. However, these methods often lack transparency in their decision-making, which undermines the reliability of their predictions. Recent research indicates that large language models (LLMs) exhibit significant potential for metaphor detection; nevertheless, their reasoning capabilities are constrained by predefined knowledge graphs. To overcome these limitations, we propose DMD, a novel dual-perspective framework that harnesses both implicit and explicit applications of metaphor theories to guide LLMs in metaphor detection, and that adopts a self-judgment mechanism to validate the responses elicited by these two forms of guidance. Compared with previous methods, our framework offers a more transparent reasoning process and more reliable predictions. Experimental results demonstrate the effectiveness of DMD, which achieves state-of-the-art performance on widely used datasets.
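The dual-perspective pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual prompts or implementation: the prompt wordings, the `llm` callable, and the tie-breaking logic are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of a dual-perspective metaphor detector with self-judgment.
# The prompts and the `llm` interface are illustrative assumptions, not DMD's code.

def detect_metaphor(sentence: str, word: str, llm) -> str:
    """Query an LLM from two perspectives, then self-judge if they disagree."""
    # Implicit guidance: the prompt reflects metaphor theory without stating it.
    implicit = llm(
        f"Does '{word}' in \"{sentence}\" carry a non-literal sense? Answer yes or no."
    )
    # Explicit guidance: the prompt states a metaphor theory (e.g., the Metaphor
    # Identification Procedure) and asks the model to apply it step by step.
    explicit = llm(
        f"Compare the contextual meaning and the basic meaning of '{word}' in "
        f"\"{sentence}\". If they differ, it is metaphorical. Answer yes or no."
    )
    if implicit == explicit:
        return implicit
    # Self-judgment: ask the LLM to validate the two conflicting responses.
    return llm(
        f"Two analyses of '{word}' in \"{sentence}\" disagree "
        f"({implicit} vs. {explicit}). Which answer is correct? Answer yes or no."
    )

# Toy stand-in for a real LLM client, used only to make the sketch runnable.
def toy_llm(prompt: str) -> str:
    return "yes" if "devoured" in prompt else "no"

print(detect_metaphor("She devoured the novel in one sitting.", "devoured", toy_llm))
```

In this sketch the self-judgment step only fires when the two perspectives conflict; with a real LLM client, `toy_llm` would be replaced by a call to the model, and the final answer would be parsed from its response.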