Multi-turn intent classification is notably challenging due to the complexity and evolving nature of conversational contexts. This paper introduces LARA, a Linguistic-Adaptive Retrieval-Augmentation framework that improves accuracy on multi-turn classification tasks across six languages, accommodating the large number of intents found in chatbot interactions. LARA combines a fine-tuned smaller model with a retrieval-augmented mechanism integrated into the architecture of LLMs. This integration allows LARA to dynamically draw on past dialogues and relevant intents, thereby improving its understanding of the context. Furthermore, our adaptive retrieval techniques strengthen the cross-lingual capabilities of LLMs without extensive retraining or fine-tuning. Comprehensive experiments demonstrate that LARA achieves state-of-the-art performance on multi-turn intent classification tasks, improving average accuracy by 3.67\% over state-of-the-art single-turn intent classifiers.
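To make the retrieval-augmented idea concrete, the following is a minimal sketch, not the authors' implementation: it retrieves labelled utterances similar to the concatenated dialogue history and picks an intent by similarity-weighted voting. The bag-of-words embedding and toy examples are stand-ins for LARA's fine-tuned encoder and LLM-based decision; all names and data here are hypothetical.

```python
# Illustrative sketch of retrieval-augmented multi-turn intent classification.
# A real system would use a fine-tuned encoder for embeddings and prompt an
# LLM with the retrieved candidates instead of voting directly.
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding (placeholder for a learned encoder).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, examples, k=2):
    # Rank labelled utterances by similarity to the query and keep the top k.
    q = embed(query)
    scored = [(cosine(q, embed(ex["text"])), ex) for ex in examples]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]

def classify(dialogue_turns, examples, k=2):
    # Concatenate the dialogue history into one query, retrieve the k most
    # similar labelled utterances, and choose the intent whose retrieved
    # neighbours have the highest total similarity.
    query = " ".join(dialogue_turns)
    votes = Counter()
    for score, ex in retrieve(query, examples, k):
        votes[ex["intent"]] += score
    return votes.most_common(1)[0][0]

# Hypothetical labelled utterance pool.
examples = [
    {"text": "where is my package", "intent": "track_order"},
    {"text": "track my order status", "intent": "track_order"},
    {"text": "i want my money back", "intent": "refund"},
    {"text": "refund my payment please", "intent": "refund"},
]

print(classify(["i ordered yesterday", "where is my package now"], examples))
# → track_order
```

Because retrieval conditions the decision on the full dialogue history rather than the last turn alone, the same mechanism extends naturally to multi-turn contexts, which is the setting the abstract targets.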