In exploratory search, users often submit vague queries to investigate unfamiliar topics, but receive limited feedback about how the search engine understood their input. This leads to a self-reinforcing cycle of mismatched results and trial-and-error reformulation. To address this, we study the task of generating user-facing natural language query intent descriptions that surface what the system likely inferred the query to mean, based on post-retrieval evidence. We propose QUIDS, a method that leverages dual-space contrastive learning to isolate intent-relevant information while suppressing irrelevant content. QUIDS combines a dual-encoder representation space with a disentangling decoder that work together to produce concise and accurate intent descriptions. Enhanced by intent-driven hard negative sampling, the model significantly outperforms state-of-the-art baselines on ROUGE, BERTScore, and human and LLM-based evaluations. Our qualitative analysis confirms QUIDS' effectiveness in generating accurate intent descriptions for exploratory search. Our work contributes to improving the interaction between users and search engines by providing feedback to the user in exploratory search settings. Our code is available at https://github.com/menauwy/QUIDS.
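To make the contrastive-learning component concrete, below is a minimal, hedged sketch of an InfoNCE-style loss with hard negatives. It is not the QUIDS implementation; the function name, tensor shapes, and the temperature value are illustrative assumptions. It only shows the general pattern the abstract alludes to: pulling a query representation toward an intent-relevant (positive) representation while pushing it away from intent-irrelevant (hard negative) ones.

```python
# Illustrative sketch only -- NOT the authors' QUIDS code.
# Shapes, names, and the temperature default are assumptions.
import torch
import torch.nn.functional as F

def contrastive_intent_loss(query_repr, positive_repr, negative_reprs,
                            temperature=0.07):
    """InfoNCE-style loss: attract the query to its intent-relevant
    representation, repel it from intent-irrelevant hard negatives.

    query_repr:     (batch, dim)
    positive_repr:  (batch, dim)
    negative_reprs: (batch, n_neg, dim)
    """
    q = F.normalize(query_repr, dim=-1)
    pos = F.normalize(positive_repr, dim=-1)
    neg = F.normalize(negative_reprs, dim=-1)

    # Cosine similarity to the positive and to each hard negative.
    pos_sim = (q * pos).sum(dim=-1, keepdim=True)          # (batch, 1)
    neg_sim = torch.einsum("bd,bnd->bn", q, neg)           # (batch, n_neg)

    # Treat the positive as class 0 in a softmax over all candidates.
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, targets)
```

Under this framing, "intent-driven hard negative sampling" would correspond to choosing the `negative_reprs` from retrieved content that is topically close to the query but mismatched with its inferred intent, which makes the contrastive objective harder and more discriminative than random negatives.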