Dense retrieval (DR) converts queries and documents into dense embeddings and measures query-document similarity in vector space. One of the challenges in DR is the lack of domain-specific training data. While DR models can learn from large-scale public datasets such as MS MARCO through transfer learning, evidence shows that not all DR models and domains benefit from transfer learning equally. Recently, some researchers have resorted to large language models (LLMs) to improve zero-shot and few-shot DR models. However, the hard, human-written prompts used in these works cannot guarantee the quality of the generated weak queries. To tackle this, we propose soft prompt tuning for augmenting DR (SPTAR): for each task, we leverage soft prompt tuning to optimize a task-specific soft prompt on limited ground-truth data and then prompt the LLMs to tag unlabeled documents with weak queries, yielding enough weak document-query pairs to train task-specific dense retrievers. We design a filter that selects high-quality example document-query pairs for the prompt, further improving the quality of the weakly tagged queries. To the best of our knowledge, no prior work utilizes soft prompt tuning to augment DR models. Experiments demonstrate that SPTAR outperforms both the unsupervised baseline BM25 and the recently proposed LLM-based augmentation methods for DR.
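The two core mechanics mentioned above, scoring query-document similarity in vector space and filtering weak document-query pairs down to the highest-quality examples, can be sketched as follows. This is a minimal illustration under assumed interfaces (function names, the use of cosine similarity, and a generic scoring callback are all assumptions for exposition), not the paper's actual implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_documents(query_vec, doc_vecs):
    """Rank documents by similarity to the query, highest first.

    Returns a list of (doc_index, score) pairs, as a dense retriever
    would when retrieving in embedding space.
    """
    scores = [(i, cosine(query_vec, d)) for i, d in enumerate(doc_vecs)]
    return sorted(scores, key=lambda x: x[1], reverse=True)

def filter_weak_pairs(pairs, score_fn, k):
    """Keep the k highest-scoring (document, weak_query) pairs.

    `score_fn` stands in for whatever quality measure the filter uses;
    the concrete criterion here is a placeholder assumption.
    """
    return sorted(pairs, key=score_fn, reverse=True)[:k]
```

For example, `filter_weak_pairs` applied to LLM-generated pairs with a similarity-based `score_fn` would retain only the pairs most likely to serve as good in-prompt examples, which is the role the filter plays in the pipeline described above.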