Large language models (LLMs) must often respond to highly ambiguous user requests. In such cases, the LLM's best response may be to ask a clarifying question to elicit more information. We observe that existing LLMs often respond by presupposing a single interpretation of such ambiguous requests, frustrating users who intended a different interpretation. We speculate this is caused by current preference data labeling practice, where LLM responses are evaluated only on their prior contexts. To address this, we propose assigning preference labels by simulating responses' expected outcomes in future turns. This allows LLMs to learn to ask clarifying questions when doing so enables them to generate responses tailored to each user's interpretation in future turns. In experiments on open-domain QA, we compare systems trained with our proposed preference labeling method against standard methods, which assign preferences based only on prior context. We evaluate systems on their ability to ask clarifying questions that recover each user's interpretation and expected answer, and find that training with our proposed method teaches LLMs to ask clarifying questions with a 5% improvement in F1 measured against the answer set from different interpretations of each query.
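To make the labeling idea concrete, the sketch below scores each candidate response by its average outcome across simulated future turns, one per plausible interpretation of the ambiguous query, and prefers the response with the better expected outcome. This is a minimal illustration of the abstract's description, not the paper's actual implementation; every name here (`simulate_user_reply`, `generate_final_answer`, `answer_quality`) is a hypothetical stand-in.

```python
# Hypothetical sketch: preference labeling by simulated future outcomes,
# rather than by judging each response on its prior context alone.
from typing import Callable, List, Tuple


def expected_future_score(
    response: str,
    interpretations: List[str],
    simulate_user_reply: Callable[[str, str], str],
    generate_final_answer: Callable[[str], str],
    answer_quality: Callable[[str, str], float],
) -> float:
    """Average final-answer quality across simulated continuations,
    one per plausible user interpretation of the ambiguous query."""
    total = 0.0
    for interp in interpretations:
        # The simulated user reacts to the response (e.g., answers a
        # clarifying question) according to their intended interpretation.
        reply = simulate_user_reply(response, interp)
        # The model then produces its final answer with that extra context.
        final = generate_final_answer(response + "\n" + reply)
        total += answer_quality(final, interp)
    return total / len(interpretations)


def label_preference(
    resp_a: str,
    resp_b: str,
    interpretations: List[str],
    simulate_user_reply: Callable[[str, str], str],
    generate_final_answer: Callable[[str], str],
    answer_quality: Callable[[str, str], float],
) -> Tuple[str, str]:
    """Return a (chosen, rejected) pair based on expected future outcomes."""
    score_a = expected_future_score(
        resp_a, interpretations, simulate_user_reply, generate_final_answer, answer_quality
    )
    score_b = expected_future_score(
        resp_b, interpretations, simulate_user_reply, generate_final_answer, answer_quality
    )
    return (resp_a, resp_b) if score_a >= score_b else (resp_b, resp_a)
```

Under this scoring, a clarifying question that lets the model tailor its later answer to each interpretation averages well across all simulated users, while a response that presupposes one interpretation scores well only on that interpretation, which is how the preference signal comes to favor asking for clarification.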