Large language models (LLMs) increasingly serve as the central control unit of AI agents, yet current approaches remain limited in their ability to deliver personalized interactions. While Retrieval-Augmented Generation (RAG) enhances LLM capabilities by improving context awareness, it lacks mechanisms for combining contextual information with user-specific data. Although personalization has been studied in fields such as human-computer interaction and cognitive science, existing perspectives remain largely conceptual, with limited attention to technical implementation. To address these gaps, we build on a unified definition of personalization as a conceptual foundation from which we derive technical requirements for adaptive, user-centered LLM-based agents. Combined with established agentic AI patterns such as multi-agent collaboration and multi-source retrieval, we present a framework that integrates persistent memory, dynamic coordination, self-validation, and evolving user profiles to enable personalized long-term interactions. We evaluate our approach on three public datasets using metrics such as retrieval accuracy, response correctness, and BERTScore. We complement these results with a five-day pilot user study that provides initial insights into user feedback on perceived personalization. The study offers early indications to guide future work and highlights the potential of integrating persistent memory and user profiles to improve the adaptivity and perceived personalization of LLM-based agents.