Understanding the nuances of a user's extensive interaction history is key to building accurate, personalized natural language systems that can adapt to evolving user preferences. To address this, we introduce PERSOMA, a Personalized Soft Prompt Adapter architecture. Unlike previous personalized prompting methods for large language models, PERSOMA offers a novel approach to efficiently capturing user history: it resamples and compresses free-form text interactions into expressive soft prompt embeddings, building upon recent research that uses embedding representations as input for LLMs. We rigorously validate our approach by evaluating various adapter architectures, first-stage sampling strategies, parameter-efficient tuning techniques such as LoRA, and other personalization methods. Our results demonstrate PERSOMA's superior ability to handle large and complex user histories compared to existing embedding-based and text-prompt-based techniques.
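To make the core idea concrete, the following is a minimal sketch, not the paper's actual implementation, of one plausible way a resampler can compress a variable-length user history into a fixed number of soft prompt embeddings: learned query vectors cross-attend over the interaction embeddings. All function names, shapes, and weights here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def resample_history(history_emb, queries, wk, wv):
    """Hypothetical perceiver-style resampler (illustrative only).

    history_emb: (n_items, d) embeddings of past user interactions.
    queries:     (k, d) learned soft-prompt queries, with k << n_items.
    wk, wv:      (d, d) key/value projection matrices.
    Returns a (k, d) matrix of compressed soft prompt embeddings
    that could be prepended to an LLM's input embedding sequence.
    """
    d = history_emb.shape[1]
    keys = history_emb @ wk                                  # (n_items, d)
    values = history_emb @ wv                                # (n_items, d)
    attn = softmax(queries @ keys.T / np.sqrt(d), axis=-1)   # (k, n_items)
    return attn @ values                                     # (k, d)

# Toy usage: 100 interaction embeddings compressed into 4 soft prompts.
rng = np.random.default_rng(0)
d, n_items, k = 16, 100, 4
hist = rng.normal(size=(n_items, d))
q = rng.normal(size=(k, d))
wk = rng.normal(size=(d, d)) / np.sqrt(d)
wv = rng.normal(size=(d, d)) / np.sqrt(d)
soft_prompts = resample_history(hist, q, wk, wv)
print(soft_prompts.shape)  # (4, 16)
```

The key property this sketch illustrates is that the output size is fixed by the number of learned queries, so the soft prompt cost to the LLM stays constant no matter how long the user history grows.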