Generative retrieval models encode pointers to information in a corpus as an index within the model's parameters. These models serve as part of a larger pipeline, where retrieved information conditions generation for knowledge-intensive NLP tasks. However, we identify two limitations: first, generative retrieval does not account for contextual information; second, the retrieval cannot be tuned for downstream readers because decoding a page title is a non-differentiable operation. This paper introduces Re3val, which is trained with generative reranking and reinforcement learning using limited data. Re3val leverages context acquired via Dense Passage Retrieval to rerank retrieved page titles and uses REINFORCE to maximize the rewards generated by constrained decoding. Additionally, we generate questions from our pre-training dataset to mitigate epistemic uncertainty and to bridge the domain gap between the pre-training and fine-tuning datasets. We then extract and rerank contexts from the KILT database using the reranked page titles. Grounded on the top five reranked contexts, Re3val achieves the top-1 KILT scores among all other generative retrieval models across five KILT datasets.
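To make the non-differentiability argument concrete, the REINFORCE objective mentioned above can be sketched as a simple policy-gradient loss. This is a minimal illustration, not the paper's actual training code: the function name, the binary retrieval reward, and the mean-reward baseline are assumptions introduced here for exposition.

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Policy-gradient surrogate loss: maximize E[reward] by minimizing
    -E[(reward - baseline) * log pi(sampled title)].
    A mean-reward baseline is used to reduce gradient variance (an
    illustrative choice, not necessarily the paper's)."""
    baseline = rewards.mean()
    return -((rewards - baseline) * log_probs).mean()

# Toy example: log-probabilities of three sampled page titles under the
# retriever's decoding policy, with a binary reward of 1 when the decoded
# title matches a gold page and 0 otherwise (hypothetical values).
log_probs = torch.log(torch.tensor([0.5, 0.3, 0.2]))
rewards = torch.tensor([1.0, 0.0, 0.0])
loss = reinforce_loss(log_probs, rewards)
```

Because the reward enters only as a scalar weight on the log-probabilities, gradients flow through the model's token distribution even though the decoded title itself is discrete, which is exactly why REINFORCE sidesteps the non-differentiable decoding step.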