Interactions with AI assistants are increasingly personalized to individual users. Because AI personalization is dynamic and machine-learning-driven, we have limited understanding of how it affects interaction outcomes and user perceptions. We conducted a large-scale controlled experiment in which 1,000 participants interacted with AI assistants prompted to take on specific personality traits and opinions. Our results show that participants consistently preferred to interact with models that shared their opinions. Participants found opinion-aligned models more trustworthy, competent, warm, and persuasive, corroborating an AI similarity-attraction hypothesis. In contrast, we observed no or only weak effects of AI personality alignment, with introverted models rated as less trustworthy and competent by introverted participants. These findings highlight opinion alignment as a central dimension of AI user preference, while underscoring the need for a more grounded discussion of the mechanisms and risks of AI personalization.