Aspect Sentiment Understanding (ASU) in interactive scenarios (e.g., question answering and dialogue) has attracted increasing interest in recent years and has made important progress. However, existing studies on interactive ASU largely ignore the coreference issue for opinion targets (i.e., aspects), even though this phenomenon is ubiquitous in interactive scenarios, especially dialogues, which limits ASU performance. Recently, large language models (LLMs) have shown a powerful ability to integrate various NLP tasks within the chat paradigm. Motivated by this, this paper proposes a new Chat-based Aspect Sentiment Understanding (ChatASU) task, aiming to explore LLMs' ability to understand aspect sentiments in dialogue scenarios. In particular, the ChatASU task introduces a sub-task, the Aspect Chain Reasoning (ACR) task, to address the aspect coreference issue. On this basis, we propose a Trusted Self-reflexion Approach (TSA) with ChatGLM as the backbone for ChatASU. Specifically, TSA treats the ACR task as an auxiliary task to boost the performance of the primary ASU task, and further integrates trusted learning into reflexion mechanisms to alleviate the factual hallucination problem intrinsic to LLMs. Furthermore, a high-quality ChatASU dataset is annotated to evaluate TSA, and extensive experiments show that our proposed TSA significantly outperforms several state-of-the-art baselines, justifying the effectiveness of TSA for ChatASU and the importance of considering the coreference and hallucination issues in ChatASU.