Integrating machine learning (ML) into customer service chatbots enhances their ability to understand and respond to user queries, ultimately improving service performance. However, such chatbots may appear artificial to some users, negatively affecting the customer experience. Hence, meticulous evaluation of the ML models in each pipeline component is crucial for optimizing performance, though differences in component functionality can lead to unfair comparisons. In this paper, we present a tailored experimental evaluation approach for goal-oriented customer service chatbots with a pipeline architecture, focusing on three key components: Natural Language Understanding (NLU), Dialogue Management (DM), and Natural Language Generation (NLG). Our methodology emphasizes assessing each component individually to determine its optimal ML model. Specifically, we optimize hyperparameters and evaluate candidate models for NLU (BERT and LSTM), DM (DQN and DDQN), and NLG (GPT-2 and DialoGPT). The results show that for the NLU component, BERT excelled at intent detection whereas LSTM was superior for slot filling. For the DM component, DDQN outperformed DQN, achieving fewer turns, higher rewards, and greater success rates. For NLG, the large language model GPT-2 surpassed DialoGPT on the BLEU, METEOR, and ROUGE metrics. These findings provide a benchmark for future research on developing and optimizing customer service chatbots, offering valuable insights into model performance and optimal hyperparameters.