Large language models (LLMs) must often respond to highly ambiguous user requests. In such cases, the LLM's best response may be to ask a clarifying question to elicit more information. Existing LLMs, however, often presuppose a single interpretation of such ambiguous requests, frustrating users who intended a different interpretation. We speculate this is caused by current preference data labeling practice, where LLM responses are evaluated only on their prior contexts. To address this, we assign preference labels by simulating the responses' expected outcomes in future turns. This allows LLMs to learn to ask clarifying questions when doing so lets them generate responses tailored to each user's interpretation in future turns. On open-domain QA datasets with multiple annotations, we evaluate systems on their ability to ask clarifying questions that recover each user's interpretation and expected answer. We compare systems trained with our proposed preference labeling method against standard methods, which assign preferences based only on the prior context. Our method achieves a 5% improvement in F1 measured against the answer set from different interpretations of each query, showing the value of modeling future conversation turns. We further demonstrate that our method can train models to judiciously determine when to ask clarifying questions, answering directly when clarification is unnecessary. In our experiments, our method improves the accuracy of such judgments by 3% over existing methods.
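The labeling scheme above can be made concrete with a toy sketch. This is a hypothetical illustration, not the paper's actual implementation: the helper names, the simulated clarification turn, and the example query and answers are all assumptions. Given an ambiguous query with several user interpretations, we score each candidate first-turn response by its expected token-level F1 in simulated future turns, then prefer the higher-scoring response.

```python
# Hypothetical sketch of future-turn preference labeling.
# All function names and example data are illustrative, not the paper's code.
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer."""
    p, g = pred.split(), gold.split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)

def expected_future_score(response: str, interpretations: dict) -> float:
    """Average simulated-outcome score of a first-turn response.

    `interpretations` maps each user intent to a pair
    (answer the model gives after that user clarifies, gold answer).
    A clarifying question is scored by the tailored answer it enables in
    the simulated next turn; a direct answer is scored as-is against the
    gold answer of every interpretation.
    """
    scores = []
    for _intent, (clarified_answer, gold) in interpretations.items():
        if response.endswith("?"):   # clarifying question: simulate one more turn
            scores.append(token_f1(clarified_answer, gold))
        else:                        # direct answer: presupposes one interpretation
            scores.append(token_f1(response, gold))
    return sum(scores) / len(scores)

def preference_label(resp_a: str, resp_b: str, interpretations: dict) -> str:
    """Prefer the response with the higher expected outcome in future turns."""
    a = expected_future_score(resp_a, interpretations)
    b = expected_future_score(resp_b, interpretations)
    return resp_a if a >= b else resp_b

# Ambiguous query "Who won the open?" with two user intents (toy data).
interps = {
    "golf":   ("Wyndham Clark won the US Open", "Wyndham Clark won the US Open"),
    "tennis": ("Novak Djokovic won the US Open", "Novak Djokovic won the US Open"),
}
direct = "Wyndham Clark won the US Open"   # presupposes the golf intent
clarify = "Do you mean the golf or the tennis US Open?"
```

Here the direct answer scores perfectly for the golf intent but only partially for the tennis intent, while the clarifying question scores perfectly for both once the simulated user disambiguates, so the clarifying question receives the preference label. Labeling on prior context alone cannot see this difference, since both responses look plausible before the user's intent is revealed.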