Emotion classification plays a significant role in emotion prediction and harmful content detection. Recent advances in NLP, particularly through large language models (LLMs), have greatly improved results in this field. This study introduces ViGoEmotions, a Vietnamese emotion corpus comprising 20,664 social media comments, each labeled with one of 27 distinct fine-grained emotion categories. To assess the dataset's quality and its impact on emotion classification, eight pre-trained Transformer-based models were benchmarked under three preprocessing strategies: preserving original emojis with rule-based normalization, converting emojis into textual descriptions, and applying ViSoLex, a model-based lexical normalization system. Results show that converting emojis into text often improves the performance of several BERT-based baselines, while preserving emojis yields the best results for ViSoBERT and CafeBERT. In contrast, removing emojis generally leads to lower performance. ViSoBERT achieved the highest Macro F1-score of 61.50% and Weighted F1-score of 63.26%, with CafeBERT and PhoBERT also performing strongly. These findings highlight that while the proposed corpus can effectively support diverse architectures, preprocessing strategies and annotation quality remain key factors influencing downstream performance.
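The emoji-handling strategies compared above can be illustrated with a minimal sketch. The mapping below is a tiny hypothetical example for demonstration only; the paper's actual pipeline (and its rule-based or ViSoLex normalization) is not specified here, and a real system would use a full emoji lexicon rather than this three-entry dictionary.

```python
# Hypothetical illustration of two of the preprocessing strategies:
# converting emojis to textual descriptions vs. removing them.
# The mapping is a made-up sample, not the authors' actual resource.
EMOJI_TO_TEXT = {
    "\U0001F602": " face_with_tears_of_joy ",  # 😂
    "\u2764\uFE0F": " red_heart ",             # ❤️
    "\U0001F621": " angry_face ",              # 😡
}

def convert_emojis(comment: str) -> str:
    """Replace each known emoji with a textual description."""
    for symbol, description in EMOJI_TO_TEXT.items():
        comment = comment.replace(symbol, description)
    # Collapse any doubled whitespace introduced by the replacements.
    return " ".join(comment.split())

def strip_emojis(comment: str) -> str:
    """Remove known emojis entirely (the lower-performing variant)."""
    for symbol in EMOJI_TO_TEXT:
        comment = comment.replace(symbol, " ")
    return " ".join(comment.split())
```

Converting rather than stripping keeps the affective signal in a form that subword tokenizers can exploit, which is consistent with the reported gains for several BERT-based baselines.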