Medical dialogue generation (MDG) has gained increasing attention due to its substantial practical value. Previous works typically employ a sequence-to-sequence framework to generate medical responses by modeling dialogue context as sequential text with annotated medical entities. While these methods have been successful in generating fluent responses, they fail to provide process explanations of reasoning and require extensive entity annotation. To address these limitations, we propose Bootstrap Prompting for Explicit Reasoning in MDG (BP4ER), a method that explicitly models MDG's multi-step reasoning process and iteratively enhances it. We employ a least-to-most prompting strategy to guide a large language model (LLM) in explicit reasoning, breaking MDG down into simpler sub-questions, where each sub-question builds on the answers to previous ones. Additionally, we introduce two distinct bootstrapping techniques for prompting, which autonomously correct errors and facilitate the LLM's explicit reasoning. This approach eliminates the need for entity annotation and increases the transparency of the MDG process by explicitly generating the intermediate reasoning chain. Experimental results on two public datasets indicate that BP4ER outperforms state-of-the-art methods in terms of both objective and subjective evaluation metrics.
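The least-to-most prompting loop described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: `call_llm` is a hypothetical stand-in for a real LLM API (stubbed here so the example runs offline), and the sub-question list would in practice come from the decomposition step.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real system would query an LLM here.
    return f"answer to: {prompt.splitlines()[-1]}"

def least_to_most(dialogue_context: str, sub_questions: list[str]) -> list[tuple[str, str]]:
    """Answer each sub-question with all earlier Q&A pairs included in the
    prompt, so later sub-questions build on earlier answers."""
    chain: list[tuple[str, str]] = []
    for q in sub_questions:
        prior = "\n".join(f"Q: {pq}\nA: {pa}" for pq, pa in chain)
        prompt = f"{dialogue_context}\n{prior}\nQ: {q}" if prior else f"{dialogue_context}\nQ: {q}"
        chain.append((q, call_llm(prompt)))
    return chain
```

The key design point is that the prompt for step *k* concatenates the dialogue context with all *k−1* earlier question–answer pairs, which is what makes the intermediate reasoning chain explicit and inspectable.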