This paper explores the application of a simple weighted loss function to Transformer-based models for multi-label emotion detection in SemEval-2025 Shared Task 11. Our approach addresses data imbalance by dynamically adjusting class weights, thereby enhancing performance on minority emotion classes without the computational burden of traditional resampling methods. We evaluate BERT, RoBERTa, and BART on the BRIGHTER dataset, using Micro F1, Macro F1, ROC-AUC, accuracy, and the Jaccard similarity coefficient as evaluation metrics. The results demonstrate that the weighted loss function improves performance on high-frequency emotion classes but has limited impact on minority classes. These findings underscore both the effectiveness and the challenges of applying this approach to imbalanced multi-label emotion detection.
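The abstract does not specify the exact weighting scheme, but a common way to realize a frequency-based weighted loss for multi-label classification is per-class positive weighting of binary cross-entropy, with weights derived from training-set label counts (the semantics of PyTorch's `BCEWithLogitsLoss(pos_weight=...)`). The sketch below is a minimal NumPy illustration of that idea; the inverse-frequency weighting rule and all function names are assumptions, not the paper's actual implementation.

```python
import numpy as np

def class_weights(labels):
    """Inverse-frequency positive weights: rarer emotion classes
    get larger weights. labels: (n_samples, n_classes) binary matrix.
    This particular weighting rule is an assumed, illustrative choice."""
    counts = labels.sum(axis=0)
    n = labels.shape[0]
    # negatives / positives per class, guarding against empty classes
    return (n - counts) / np.maximum(counts, 1)

def weighted_bce(logits, targets, pos_weight):
    """Binary cross-entropy over all labels, up-weighting the positive
    term of each class by pos_weight (BCEWithLogitsLoss-style)."""
    p = 1.0 / (1.0 + np.exp(-logits))       # sigmoid per label
    eps = 1e-12                              # numerical safety
    loss = -(pos_weight * targets * np.log(p + eps)
             + (1.0 - targets) * np.log(1.0 - p + eps))
    return loss.mean()
```

With weights computed this way, a missed positive on a rare emotion contributes more to the loss than a missed positive on a frequent one, which is the intended pressure toward minority classes.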