Large language models (LLMs) have significantly advanced Natural Language Processing (NLP) tasks in recent years. However, their general-purpose nature limits their effectiveness in scenarios requiring personalized responses, such as recommender systems and chatbots. This paper investigates methods for personalizing LLMs, comparing fine-tuning and zero-shot reasoning approaches on subjective tasks. Results demonstrate that personalized fine-tuning improves model reasoning compared to non-personalized models. Experiments on emotion recognition and hate speech detection datasets show consistent performance gains with personalized methods across different LLM architectures. These findings underscore the importance of personalization for enhancing LLM capabilities in subjective text perception tasks.