Aspect Sentiment Triplet Extraction (ASTE) aims to jointly extract sentiment triplets (aspect term, opinion term, sentiment polarity) from a given corpus. Existing approaches within the pretraining-finetuning paradigm tend either to meticulously craft complex tagging schemes and classification heads, or to incorporate external semantic augmentation to boost performance. In this study, we re-examine, for the first time, the redundancy in existing tagging schemes and the potential for internal enhancement of pretrained representations. We propose a method that improves and exploits pretrained representations by integrating a minimalist tagging scheme with a novel token-level contrastive learning strategy. The proposed approach matches or surpasses state-of-the-art techniques while featuring a more compact design and lower computational overhead. In addition, we present the first formal evaluation of GPT-4 on this task in few-shot and Chain-of-Thought settings. The results demonstrate that the pretraining-finetuning paradigm remains highly effective even in the era of large language models.
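To make the token-level contrastive idea concrete, the sketch below shows one common formulation of a supervised, token-level contrastive loss, in which tokens sharing the same tag under the tagging scheme are pulled together and others pushed apart. This is an illustrative assumption, not the paper's exact objective: the function name, the positive-pair definition, and the temperature value are all hypothetical.

```python
import numpy as np

def token_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised token-level contrastive loss (SupCon-style).

    embeddings: (n_tokens, dim) token representations from the encoder.
    labels:     (n_tokens,) integer tag ids; tokens with the same tag
                are treated as positives for each other (an assumption).
    """
    # L2-normalize so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature

    n = len(labels)
    # Exclude each token's similarity with itself from the softmax.
    logits = sim - 1e9 * np.eye(n)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positives: same tag, not the anchor itself.
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0  # anchors with at least one positive

    # Average negative log-probability over each anchor's positives.
    loss = -(log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

Under this formulation, representations that cluster by tag yield a lower loss than mixed ones, which is the intuition behind using the loss to enhance pretrained representations internally, without external semantic resources.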