Product review generation is an important task in recommender systems, as generated reviews can provide explanations and persuasiveness for recommendations. Recently, Large Language Models (LLMs, e.g., ChatGPT) have shown superior text modeling and generation ability, which can be applied to review generation. However, directly applying LLMs to generate reviews may be hindered by the ``polite'' phenomenon of LLMs, failing to produce personalized reviews (e.g., negative reviews). In this paper, we propose Review-LLM, which customizes LLMs for personalized review generation. Firstly, we construct the prompt input by aggregating the user's historical behaviors, including the corresponding item titles and reviews; this enables the LLM to capture the user's interest features and review writing style. Secondly, we incorporate ratings as indicators of satisfaction into the prompt, which further improves the model's understanding of user preferences and its control over the sentiment tendency of generated reviews. Finally, we feed the prompt text into the LLM and use Supervised Fine-Tuning (SFT) to make the model generate personalized reviews for the given user and target item. Experimental results on a real-world dataset show that our fine-tuned model achieves better review generation performance than existing closed-source LLMs.
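The prompt construction described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact prompt template: the field names (`title`, `rating`, `review`), the rating scale, and the wording of the instruction are all assumptions.

```python
# Hypothetical sketch of the prompt construction: aggregate the user's
# historical behaviors (item titles, reviews, and ratings as satisfaction
# indicators) into a single prompt for the LLM. Field names and the
# instruction wording are illustrative assumptions, not the paper's format.

def build_review_prompt(history, target_title, target_rating):
    """Build a personalized review-generation prompt from user history."""
    lines = ["The user has reviewed the following items:"]
    for item in history:
        lines.append(
            f'- Title: {item["title"]} | Rating: {item["rating"]}/5 | '
            f'Review: {item["review"]}'
        )
    lines.append(
        f'Please write a review for "{target_title}" that this user, '
        f"who rated it {target_rating}/5, would plausibly write:"
    )
    return "\n".join(lines)

# Example usage with made-up history data.
history = [
    {"title": "Wireless Mouse", "rating": 2,
     "review": "Battery died within a week."},
]
prompt = build_review_prompt(history, "Mechanical Keyboard", 4)
print(prompt)
```

The resulting prompt text would then be paired with the ground-truth review as the target output for Supervised Fine-Tuning.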