AI companions enable deep emotional relationships by engaging a user's sense of identity, but they also pose risks such as unhealthy emotional dependence. Mitigating these risks requires first understanding the underlying process of identity construction and negotiation with AI companions. Focusing on Character.AI (C.AI), a popular AI companion, we conducted an LLM-assisted thematic analysis of 22,374 online discussions on its subreddit. Using Identity Negotiation Theory as an analytical lens, we identified a three-stage process: 1) five user motivations; 2) an identity negotiation process involving three communication expectations and four identity co-construction strategies; and 3) three emotional outcomes. Our findings surface the identity work users perform as both performers and directors to co-construct identities in negotiation with C.AI. This process takes place within a socio-emotional sandbox where users can experiment with social roles and express emotions without the judgment of human partners. Finally, we offer design implications for emotionally supporting users while mitigating the risks.