Transformer models have significantly advanced the field of emotion recognition. However, open challenges remain when posing open-ended queries to Large Language Models (LLMs). Although current models offer good results, automatic emotion analysis in open texts faces substantial obstacles, such as contextual ambiguity, linguistic variability, and the difficulty of interpreting complex emotional expressions. These limitations hinder the direct application of general-purpose models. Accordingly, this work compares the effectiveness of fine-tuning and prompt engineering for emotion detection in three distinct scenarios: (i) the performance of fine-tuned pre-trained models and general-purpose LLMs using simple prompts; (ii) the effectiveness of different emotion prompt designs with LLMs; and (iii) the impact of emotion grouping techniques on these models. In the experiments, a fine-tuned pre-trained model attains evaluation metrics above 70% for emotion recognition. Moreover, the findings highlight that LLMs require structured prompt engineering and emotion grouping to improve their performance. These advances benefit sentiment analysis, human-computer interaction, and the understanding of user behavior across various domains.
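As context for scenario (i), the following is a minimal sketch of fine-tuning a pre-trained transformer for emotion classification. It assumes the Hugging Face `transformers` and `datasets` libraries and the public `dair-ai/emotion` corpus; the base model, hyperparameters, and label set are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of scenario (i): fine-tuning a pre-trained model for
# emotion recognition. Model name, dataset, and hyperparameters are
# illustrative assumptions, not the paper's reported setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Public emotion corpus with 6 labels: sadness, joy, love, anger, fear, surprise.
dataset = load_dataset("dair-ai/emotion")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=6)

def tokenize(batch):
    # Truncate/pad each short text to a fixed 128-token window.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="emotion-ft",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```

For scenarios (ii) and (iii), a structured prompt can constrain an LLM's open-ended output, and an emotion-grouping map can collapse fine-grained labels into coarser categories. The group names and template wording below are hypothetical illustrations of the idea, not the paper's actual prompt designs.

```python
# Hedged illustration of structured prompting with emotion grouping;
# the groups and wording are assumptions for demonstration only.
EMOTION_GROUPS = {
    "positive": ["joy", "love"],
    "negative": ["anger", "sadness", "fear"],
    "ambiguous": ["surprise"],
}

def build_prompt(text: str) -> str:
    """Return a classification prompt that restricts the LLM to one group name."""
    options = ", ".join(EMOTION_GROUPS)
    return (
        "You are an emotion classifier.\n"
        f"Classify the text into exactly one group: {options}.\n"
        "Answer with the group name only.\n\n"
        f"Text: {text}\nGroup:"
    )

print(build_prompt("I can't believe they cancelled the show again."))
```

Restricting the answer space in this way is one common means of making free-form LLM outputs comparable with the fixed label set used by a fine-tuned classifier.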