AI companions enable deep emotional relationships by engaging a user's sense of identity, but they also pose risks such as unhealthy emotional dependence. Mitigating these risks requires first understanding the underlying process of identity construction and negotiation with AI companions. Focusing on Character.AI (C.AI), a popular AI companion, we conducted an LLM-assisted thematic analysis of 22,374 online discussions on its subreddit. Using Identity Negotiation Theory as an analytical lens, we identified a three-stage process: 1) five user motivations; 2) an identity negotiation process involving three communication expectations and four identity co-construction strategies; and 3) three emotional outcomes. Our findings surface the identity work users perform, as both performers and directors, to co-construct identities in negotiation with C.AI. This process takes place within a socio-emotional sandbox where users can experiment with social roles and express emotions with non-human partners. Finally, we offer design implications for emotionally supporting users while mitigating these risks.