Large language models (LLMs) have shown excellent performance on various NLP tasks. To use LLMs as strong sequential recommenders, we explore the in-context learning (ICL) approach to sequential recommendation. We investigate the effects of instruction format, task consistency, demonstration selection, and the number of demonstrations. Since increasing the number of demonstrations in ICL does not improve accuracy despite lengthening the prompt, we propose a novel method called LLMSRec-Syn that incorporates multiple demonstration users into one aggregated demonstration. Our experiments on three recommendation datasets show that LLMSRec-Syn outperforms state-of-the-art LLM-based sequential recommendation methods. In some cases, LLMSRec-Syn can perform on par with or even better than supervised learning methods. Our code is publicly available at https://github.com/demoleiwang/LLMSRec_Syn.
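To make the aggregated-demonstration idea concrete, the sketch below shows one plausible way to merge several demonstration users' interaction histories into a single in-context example, rather than appending one example per user. All function names, field names, and the prompt wording are illustrative assumptions, not the authors' actual implementation.

```python
def build_aggregated_demo(demo_users, target_history, candidates):
    """Merge several demonstration users into one aggregated ICL example.

    demo_users: list of dicts with hypothetical keys "history" (list of item
    titles) and "next_item" (the ground-truth next interaction).
    """
    # Pool the demonstration users' histories into one combined sequence,
    # deduplicating while preserving first-occurrence order.
    seen, combined = set(), []
    for user in demo_users:
        for item in user["history"]:
            if item not in seen:
                seen.add(item)
                combined.append(item)

    # A single aggregated demonstration block (illustrative format only):
    # one merged history paired with the demonstration users' next items.
    next_items = [user["next_item"] for user in demo_users]
    demo_block = (
        "Aggregated demonstration:\n"
        f"Watched items: {', '.join(combined)}\n"
        f"Next items: {', '.join(next_items)}\n"
    )
    query_block = (
        "Now rank the candidate items for the target user.\n"
        f"Watched items: {', '.join(target_history)}\n"
        f"Candidates: {', '.join(candidates)}\n"
    )
    return demo_block + query_block

prompt = build_aggregated_demo(
    demo_users=[
        {"history": ["Heat", "Se7en"], "next_item": "Fargo"},
        {"history": ["Alien", "Heat"], "next_item": "Blade Runner"},
    ],
    target_history=["The Matrix"],
    candidates=["Inception", "Tenet"],
)
```

Because the demonstrations are fused into one block, the prompt stays short as more demonstration users are added, which is the motivation the abstract gives for LLMSRec-Syn.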