Keyphrase generation (KPG) aims to automatically produce a set of phrases that capture the core concepts of a given document. The dominant paradigms in KPG are one2seq and one2set, and there has recently been growing interest in applying large language models (LLMs) to the task. Our preliminary experiments reveal that a single model struggles to excel in both recall and precision. Further analysis shows that: 1) the one2set paradigm has the advantage of high recall but suffers from improper assignment of supervision signals during training; 2) LLMs are powerful keyphrase selectors, but existing selection methods often make redundant selections. Motivated by these observations, we introduce a generate-then-select framework that decomposes KPG into two steps: a one2set-based model serves as the generator to produce candidates, and an LLM serves as the selector to pick keyphrases from these candidates. In particular, we make two important improvements to the generator and selector: 1) we design an Optimal Transport-based assignment strategy to address the improper assignments mentioned above; 2) we model keyphrase selection as a sequence labeling task to alleviate redundant selections. Experimental results on multiple benchmark datasets show that our framework significantly outperforms state-of-the-art models, especially in absent keyphrase prediction.
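To make the Optimal Transport-based assignment idea concrete, the following is a minimal, hypothetical sketch: entropic-regularized OT solved with Sinkhorn iterations yields a soft matching between the generator's prediction slots (rows) and the target keyphrases (columns), given a cost matrix of slot-target dissimilarities. The function name, the toy cost values, and the uniform marginals are illustrative assumptions for this sketch, not the paper's actual assignment strategy.

```python
import numpy as np

def sinkhorn_assignment(cost, n_iters=100, eps=0.1):
    """Soft assignment between prediction slots (rows) and target
    keyphrases (cols) via entropic-regularized optimal transport,
    computed with Sinkhorn iterations. Uniform marginals are an
    illustrative assumption here."""
    n, m = cost.shape
    K = np.exp(-cost / eps)            # Gibbs kernel from the cost matrix
    r = np.full(n, 1.0 / n)            # row (slot) marginal
    c = np.full(m, 1.0 / m)            # column (target) marginal
    u = np.ones(n)
    v = np.ones(m)
    for _ in range(n_iters):           # alternate scaling to match marginals
        u = r / (K @ v)
        v = c / (K.T @ u)
    # transport plan: P[i, j] = u[i] * K[i, j] * v[j]
    return np.diag(u) @ K @ np.diag(v)

# Toy example: 3 prediction slots, 2 target keyphrases.
# Low cost = slot and target are similar, so more mass is assigned.
cost = np.array([[0.1, 0.9],
                 [0.8, 0.2],
                 [0.5, 0.5]])
P = sinkhorn_assignment(cost)
```

In such a scheme, each row of the plan `P` indicates how strongly a slot's supervision signal is tied to each target, which is the kind of soft, globally consistent assignment OT provides in place of greedy per-slot matching.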