In task-oriented conversational AI evaluation, unsupervised methods correlate poorly with human judgments, and supervised approaches lack generalization. Recent advances in large language models (LLMs) show robust zero-shot and few-shot capabilities across NLP tasks. This paper explores using LLMs for automated dialogue quality evaluation, experimenting with various configurations on public and proprietary datasets. Varying factors such as model size, in-context examples, and example selection techniques, we examine "chain-of-thought" (CoT) reasoning and label extraction procedures. Our results show that (1) larger models yield more accurate dialogue labels; (2) algorithmic selection of in-context examples outperforms random selection; (3) CoT reasoning, where an LLM is asked to provide justifications before outputting final labels, improves performance; and (4) fine-tuned LLMs outperform out-of-the-box ones. Our results indicate that LLMs that are suitably fine-tuned and have sufficient reasoning capabilities can be leveraged for automated dialogue evaluation.
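To make findings (2) and (3) concrete, the sketch below illustrates one way such an evaluation prompt could be assembled: in-context examples are chosen algorithmically (here, by embedding cosine similarity, a common choice but only an assumption about the paper's method), and the prompt asks the LLM for a justification before the final label. All names (`embed`, `labeled_pool`, the good/bad label scheme) are hypothetical placeholders, not the paper's actual pipeline.

```python
# Illustrative sketch only: similarity-based example selection plus a CoT-style
# prompt that elicits reasoning before the final quality label.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder sentence embedding; swap in any real encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

def select_examples(dialogue: str, labeled_pool: list[dict], k: int = 3) -> list[dict]:
    """Pick the k labeled dialogues most similar to the target dialogue."""
    q = embed(dialogue)
    def cosine(ex: dict) -> float:
        v = embed(ex["dialogue"])
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
    return sorted(labeled_pool, key=cosine, reverse=True)[:k]

def build_cot_prompt(dialogue: str, labeled_pool: list[dict]) -> str:
    """Assemble a few-shot prompt that asks for a justification before the label."""
    parts = ["You are evaluating task-oriented dialogues for quality (label: good/bad)."]
    for ex in select_examples(dialogue, labeled_pool):
        parts.append(
            f"Dialogue: {ex['dialogue']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"Label: {ex['label']}"
        )
    # The target dialogue ends with "Reasoning:" so the model justifies first,
    # then emits the label, from which the final answer is extracted.
    parts.append(f"Dialogue: {dialogue}\nReasoning:")
    return "\n\n".join(parts)
```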