Task-oriented dialogues must maintain consistency both within the dialogue itself, ensuring logical coherence across turns, and with the conversational domain, accurately reflecting external knowledge. We propose to conceptualize dialogue consistency as a Constraint Satisfaction Problem (CSP), wherein variables represent segments of the dialogue referencing the conversational domain, and constraints among variables reflect dialogue properties, including linguistic, conversational, and domain-based aspects. To demonstrate the feasibility of the approach, we utilize a CSP solver to detect inconsistencies in dialogues re-lexicalized by an LLM. Our findings indicate that: (i) CSP is effective at detecting dialogue inconsistencies; and (ii) consistent dialogue re-lexicalization is challenging for state-of-the-art LLMs, which achieve an accuracy of only 0.15 when evaluated against a CSP solver. Furthermore, through an ablation study, we reveal that constraints derived from domain knowledge are the most difficult to respect. We argue that CSP captures core properties of dialogue consistency that have been poorly considered by approaches based on component pipelines.
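The CSP formulation can be illustrated with a minimal, self-contained sketch. All names here (the mini restaurant database, the turn-level variables, and the two constraints) are hypothetical examples, not the paper's actual encoding: variables hold candidate re-lexicalized values for domain-referencing dialogue segments, one constraint enforces cross-turn agreement (conversational), and another checks the assignment against domain knowledge. A brute-force check stands in for a real CSP solver.

```python
from itertools import product

# Hypothetical mini-domain: a restaurant database (illustrative only).
DOMAIN_DB = [
    {"name": "Curry Garden", "food": "indian", "area": "centre"},
    {"name": "Golden Wok", "food": "chinese", "area": "north"},
]

# CSP variables: dialogue segments that reference the domain,
# each with its candidate re-lexicalized values.
variables = {
    "turn1_food": ["indian", "chinese"],
    "turn3_food": ["indian", "chinese"],      # should agree with turn1
    "turn4_name": ["Curry Garden", "Golden Wok"],
}

def satisfied(assignment):
    # Conversational constraint: the food type must not change across turns.
    if assignment["turn1_food"] != assignment["turn3_food"]:
        return False
    # Domain constraint: the named restaurant must serve the requested food.
    return any(
        r["name"] == assignment["turn4_name"] and r["food"] == assignment["turn1_food"]
        for r in DOMAIN_DB
    )

# Brute-force "solver": enumerate assignments, keep the consistent ones.
names = list(variables)
solutions = [
    dict(zip(names, values))
    for values in product(*variables.values())
    if satisfied(dict(zip(names, values)))
]
print(solutions)  # the dialogue lexicalizations that satisfy all constraints
```

An inconsistency check on an LLM's re-lexicalized dialogue then amounts to asking whether its concrete assignment of values appears among the solutions; in practice a proper solver replaces the exhaustive enumeration.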