While sentiment analysis has advanced from the sentence level to the aspect level, i.e., the identification of concrete terms related to a sentiment, the equivalent field of Aspect-based Emotion Analysis (ABEA) faces dataset bottlenecks and the increased complexity of emotion classes in contrast to binary sentiments. This paper addresses these gaps by generating a first ABEA training dataset, consisting of 2,621 English Tweets, and fine-tuning a BERT-based model for the ABEA sub-tasks of Aspect Term Extraction (ATE) and Aspect Emotion Classification (AEC). The dataset annotation process was based on the hierarchical emotion theory by Shaver et al. [1] and employed group annotation and majority-voting strategies to facilitate label consistency. The resulting dataset contains aspect-level emotion labels for Anger, Sadness, Happiness, Fear, and a None class. Using the new ABEA training dataset, the state-of-the-art ABSA model GRACE by Luo et al. [2] was fine-tuned for ABEA. The results showed a performance plateau, with an F1-score of 70.1% for ATE and 46.9% for joint ATE and AEC. The limiting factors for model performance were broadly identified as the small training dataset size coupled with the increased task complexity, which caused model overfitting and limited the model's ability to generalize to new data.
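To illustrate the majority-voting strategy mentioned above, the sketch below aggregates per-annotator emotion labels for a single aspect term. This is a minimal illustration, not the paper's implementation: the annotation record format and the tie-breaking fallback to the None class are assumptions.

```python
from collections import Counter

# Aspect-level emotion label set from the paper.
EMOTIONS = ["Anger", "Sadness", "Happiness", "Fear", "None"]

def majority_vote(annotations: list[str]) -> str:
    """Return the label most annotators assigned to one aspect term.

    Ties fall back to "None" here; the paper's actual tie-breaking
    rule is not specified in the abstract, so this is an assumption.
    """
    ranked = Counter(annotations).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "None"
    return ranked[0][0]

# Example: three annotators label the (hypothetical) aspect "delivery".
print(majority_vote(["Anger", "Anger", "Sadness"]))  # -> "Anger"
```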