Open-domain semantic parsing remains a challenging task, as models often rely on heuristics and struggle to handle unseen concepts. In this paper, we investigate the potential of large language models (LLMs) for this task and introduce Retrieval-Augmented Semantic Parsing (RASP), a simple yet effective approach that integrates external lexical knowledge into the parsing process. Our experiments show not only that LLMs outperform previous encoder-decoder baselines for semantic parsing, but also that RASP further enhances their ability to predict unseen concepts, nearly doubling the performance of previous models on out-of-distribution concepts. These findings highlight the promise of leveraging large language models and retrieval mechanisms for robust and open-domain semantic parsing.