Large Language Models (LLMs) have become powerful foundations for generative recommender systems, framing recommendation as a text generation task. However, existing generative recommendation methods often rely on discrete ID-based prompts or task-specific soft prompts, which overlook the valuable collaborative signals shared among users with similar interests. To address this limitation, this paper presents a compositional framework that integrates a user's individual preferences with the collective preferences of similar users to build personalized soft prompts. Specifically, an attention-based mechanism fuses embeddings from users with similar interests, creating a richer representation that captures multiple facets of user preferences. This design dynamically emphasizes shared interests while preserving individual user preferences. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approach across sequential recommendation, top-N recommendation, and explanation generation tasks, underscoring the advantages of incorporating collaborative signals through an attention-based compositional strategy.
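The fusion mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the target user attends over the embeddings of similar users via scaled dot-product attention, and that the attended collective preference is combined residually with the individual embedding to form a soft-prompt vector. All function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_user_preferences(user_emb, neighbor_embs):
    """Illustrative attention-based fusion (names are hypothetical).

    user_emb:      (d,)   embedding of the target user
    neighbor_embs: (k, d) embeddings of k users with similar interests
    Returns a fused soft-prompt vector and the attention weights.
    """
    d = user_emb.shape[-1]
    # Scaled dot-product scores: how relevant each similar user is.
    scores = neighbor_embs @ user_emb / np.sqrt(d)      # (k,)
    weights = softmax(scores)                           # emphasize shared interests
    collective = weights @ neighbor_embs                # (d,) collective preference
    # Residual combination preserves the individual preference.
    return user_emb + collective, weights

rng = np.random.default_rng(0)
u = rng.normal(size=8)          # target user embedding
N = rng.normal(size=(4, 8))     # four similar users
prompt_emb, w = fuse_user_preferences(u, N)
```

In a full system, `prompt_emb` (or several such vectors) would be prepended to the LLM's input embeddings as a personalized soft prompt; the attention weights make the contribution of each similar user explicit.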