This study aims to develop models that generate corpus-informed clarifying questions for web search, ensuring that the questions align with the information available in the retrieval corpus. We demonstrate the effectiveness of Retrieval-Augmented Language Models (RAG) in this process, emphasising their ability to (i) jointly model the user query and the retrieval corpus to pinpoint uncertainty and ask for clarifications end-to-end, and (ii) model more evidence documents, which can be used to increase the breadth of the questions asked. However, we observe that in current datasets search intents are largely unsupported by the corpus, which is problematic for both training and evaluation. This causes question generation models to ``hallucinate'', i.e., suggest intents that are not in the corpus, which can have detrimental effects on performance. To address this, we propose dataset augmentation methods that align the ground-truth clarifications with the retrieval corpus. Additionally, we explore techniques to enhance the relevance of the evidence pool during inference, but find that identifying ground-truth intents within the corpus remains challenging. Our analysis suggests that this challenge is partly due to the bias of current datasets towards clarification taxonomies, and calls for data that can support generating corpus-informed clarifications.