Incorporating collaborative information into Large Language Models (LLMs) is a promising technique for adapting LLMs to recommendation. Existing methods achieve this by concatenating collaborative features with text tokens into a unified sequence input and then fine-tuning to align these features with the LLM's input space. Although effective, in this work we identify two limitations of this paradigm that hinder the integration of general knowledge and collaborative information, resulting in sub-optimal recommendation performance. (1) Fine-tuning the LLM on recommendation data can undermine its inherent world knowledge and fundamental competencies, which are crucial for interpreting and reasoning over recommendation text. (2) Incorporating collaborative features into textual prompts disrupts the semantics of the original prompts, preventing the LLM from generating appropriate outputs. In this paper, we propose a new paradigm, CoRA (an acronym for Collaborative LoRA), built on a collaborative weights generator. Rather than aligning collaborative information with the LLM's input space, this method aligns it with the LLM's parameter space, representing it as incremental weights that update the LLM's output. In this way, the LLM perceives collaborative information without altering its general knowledge or text-inference capabilities. Specifically, we employ a collaborative filtering model to extract user and item embeddings and convert them into collaborative weights with low-rank properties through the collaborative weights generator. We then merge the collaborative weights into the LLM's weights, enabling the LLM to perceive collaborative signals and generate personalized recommendations without fine-tuning or extra collaborative tokens in prompts. Extensive experiments confirm that CoRA effectively integrates collaborative information into the LLM, enhancing recommendation performance.
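The core mechanism — generating per-user, per-item low-rank incremental weights and merging them into a frozen LLM layer — can be sketched as follows. This is a minimal NumPy illustration under assumed toy dimensions; the generator maps (`G_a`, `G_b`) and all variable names are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; real LLM hidden dims and CF embedding dims are much larger.
d_model, d_cf, rank = 16, 8, 2

# Frozen LLM weight for one linear layer (stands in for e.g. an attention projection).
W_llm = rng.standard_normal((d_model, d_model))

# User/item embeddings from a pretrained collaborative filtering model (toy values here).
user_emb = rng.standard_normal(d_cf)
item_emb = rng.standard_normal(d_cf)

# Hypothetical collaborative weights generator: two linear maps turn the
# concatenated CF embeddings into low-rank factors A (d_model x rank) and
# B (rank x d_model), in the spirit of LoRA.
G_a = rng.standard_normal((2 * d_cf, d_model * rank)) * 0.01
G_b = rng.standard_normal((2 * d_cf, rank * d_model)) * 0.01

cf = np.concatenate([user_emb, item_emb])      # (2 * d_cf,)
A = (cf @ G_a).reshape(d_model, rank)          # low-rank factor A
B = (cf @ G_b).reshape(rank, d_model)          # low-rank factor B

delta_W = A @ B                                # collaborative weights, rank <= 2
W_merged = W_llm + delta_W                     # merge into the frozen LLM weight

# The merged layer now carries the collaborative signal for this (user, item)
# pair, with no extra collaborative tokens in the prompt and no fine-tuning of W_llm.
hidden = rng.standard_normal(d_model)
out = W_merged @ hidden
```

Because `delta_W` is a product of two thin matrices, its rank is at most `rank`, so the generator only has to produce `2 * d_model * rank` values per interaction rather than a full `d_model x d_model` update.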