Personalized conversational information retrieval (CIR) combines conversational and personalization elements to satisfy users' complex information needs through multi-turn interaction, taking their backgrounds into account. The key promise is that a personal textual knowledge base (PTKB) can improve CIR effectiveness by making retrieval results more relevant to the user's background. However, the PTKB is noisy: not every piece of knowledge in it is relevant to the specific query at hand. In this paper, we explore and test several ways to select knowledge from the PTKB and use it for query reformulation with a large language model (LLM). The experimental results show that the PTKB does not always improve search results when used alone, but the LLM can generate a more appropriate personalized query when high-quality guidance is provided.
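The pipeline described above — selecting relevant PTKB entries, then using them to reformulate the query — can be sketched as follows. This is a minimal illustration, not the paper's exact method: the `llm` callable, prompt wording, and the toy stand-in model are all assumptions introduced here for demonstration.

```python
def select_relevant_knowledge(llm, query, ptkb):
    """Ask the LLM which PTKB entries are relevant to the current query."""
    numbered = "\n".join(f"{i}. {k}" for i, k in enumerate(ptkb))
    prompt = (
        "Given the user query and the user's personal knowledge below, "
        "list the numbers of relevant entries (comma-separated), or 'none'.\n"
        f"Query: {query}\nKnowledge:\n{numbered}"
    )
    answer = llm(prompt).strip().lower()
    if answer == "none":
        return []
    ids = [int(t) for t in answer.replace(",", " ").split() if t.isdigit()]
    return [ptkb[i] for i in ids if 0 <= i < len(ptkb)]


def reformulate(llm, query, history, selected):
    """Ask the LLM to rewrite the query using dialogue context and selected knowledge."""
    prompt = (
        "Rewrite the query so it is self-contained and personalized.\n"
        f"History: {' | '.join(history)}\n"
        f"Relevant user knowledge: {'; '.join(selected) or '(none)'}\n"
        f"Query: {query}\nRewritten query:"
    )
    return llm(prompt).strip()


# Toy stand-in for a real LLM, so the sketch runs without an API.
def toy_llm(prompt):
    if "list the numbers" in prompt:
        return "0"  # pretend only entry 0 is relevant
    return "vegetarian-friendly restaurants near the conference venue"


ptkb = ["I am vegetarian.", "I dislike long commutes."]
selected = select_relevant_knowledge(toy_llm, "Any good restaurants nearby?", ptkb)
new_query = reformulate(
    toy_llm,
    "Any good restaurants nearby?",
    ["We discussed the conference venue."],
    selected,
)
```

Separating selection from reformulation mirrors the abstract's point that the PTKB is noisy: filtering first keeps irrelevant personal knowledge out of the rewritten query.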