Recent Large Language Model (LLM)-based AI can exhibit recognizable and measurable personality traits during conversations to improve user experience. However, because people's understanding of their own personality traits can be influenced by the traits of their interaction partners, a potential risk is that AI traits may shape and bias users' self-concept of their own traits. To explore this possibility, we conducted a randomized behavioral experiment. Our results indicate that after conversations about personal topics with an LLM-based AI chatbot exhibiting GPT-4o's default personality traits, users' self-concepts aligned with the AI's measured personality traits. The longer the conversation, the greater the alignment. This alignment also increased the homogeneity of self-concepts across users. We further observed that the degree of self-concept alignment was positively associated with users' enjoyment of the conversation. Our findings reveal how AI personality traits can shape users' self-concepts through human-AI conversation, highlighting both risks and opportunities. We provide important design implications for developing more responsible and ethical AI systems.