Frame Semantic Parsing (FSP) entails identifying predicates and labeling their arguments according to Frame Semantics. This paper investigates the use of In-Context Learning (ICL) with Large Language Models (LLMs) to perform FSP without model fine-tuning. We propose a method that automatically generates task-specific prompts for the Frame Identification (FI) and Frame Semantic Role Labeling (FSRL) subtasks, relying solely on the FrameNet database. These prompts, constructed from frame definitions and annotated examples, are used to guide six different LLMs. Experiments are conducted on a subset of frames related to violent events. The method achieves competitive results, with F1 scores of 94.3% for FI and 77.4% for FSRL. The findings suggest that ICL offers a practical and effective alternative to traditional fine-tuning for domain-specific FSP tasks.
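To make the prompt-construction idea concrete, the following is a minimal sketch (not the authors' actual implementation) of how FI and FSRL prompts could be assembled from frame definitions and annotated examples. The frame data, function names, and example sentences are illustrative assumptions; a real pipeline would draw this information from the FrameNet database.

```python
# Illustrative sketch only: assembling ICL prompts for Frame Identification (FI)
# and Frame Semantic Role Labeling (FSRL) from frame definitions and annotated
# examples. The frame data below is hand-written for illustration; a real
# pipeline would read it from the FrameNet database.

# Hypothetical excerpt of FrameNet-style frame data.
FRAMES = {
    "Attack": {
        "definition": "An Assailant physically attacks a Victim.",
        "core_elements": ["Assailant", "Victim", "Weapon"],
        "examples": [
            "The rebels [Assailant] attacked the village [Victim] at dawn.",
        ],
    },
    "Killing": {
        "definition": "A Killer causes the death of a Victim.",
        "core_elements": ["Killer", "Victim", "Instrument"],
        "examples": [
            "The suspect [Killer] killed two guards [Victim].",
        ],
    },
}


def build_fi_prompt(sentence: str, predicate: str) -> str:
    """Frame Identification: ask the LLM to pick the frame evoked by the predicate."""
    lines = ["Choose the FrameNet frame evoked by the target predicate.", ""]
    for name, frame in FRAMES.items():
        lines.append(f"Frame: {name}")
        lines.append(f"Definition: {frame['definition']}")
        lines.append(f"Example: {frame['examples'][0]}")
        lines.append("")
    lines.append(f"Sentence: {sentence}")
    lines.append(f"Predicate: {predicate}")
    lines.append("Frame:")
    return "\n".join(lines)


def build_fsrl_prompt(sentence: str, frame: str) -> str:
    """FSRL: ask the LLM to label the frame elements of an identified frame."""
    info = FRAMES[frame]
    lines = [
        f"Label the arguments of the frame '{frame}' in the sentence.",
        f"Definition: {info['definition']}",
        f"Core frame elements: {', '.join(info['core_elements'])}",
        f"Annotated example: {info['examples'][0]}",
        "",
        f"Sentence: {sentence}",
        "Labeled arguments:",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    s = "Armed men attacked the convoy near the border."
    print(build_fi_prompt(s, "attacked"))
    print(build_fsrl_prompt(s, "Attack"))
```

The resulting prompt strings would then be sent, unchanged, to each of the evaluated LLMs; no model parameters are updated at any point.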