The groundbreaking capabilities of Large Language Models (LLMs) offer new opportunities for enhancing human-computer interaction through emotion-adaptive Artificial Intelligence (AI). However, deliberately controlling the sentiment in these systems remains challenging. The present study investigates the potential of prompt engineering for controlling sentiment in LLM-generated text, providing a resource-sensitive and accessible alternative to existing methods. Using Ekman's six basic emotions (e.g., joy, disgust), we examine various prompting techniques, including Zero-Shot and Chain-of-Thought prompting with gpt-3.5-turbo, and compare them to fine-tuning. Our results indicate that prompt engineering effectively steers emotions in AI-generated texts, offering a practical and cost-effective alternative to fine-tuning, especially in data-constrained settings. Among the techniques examined, Few-Shot prompting with human-written examples was the most effective, likely due to the additional task-specific guidance it provides. These findings contribute valuable insights towards developing emotion-adaptive AI systems.