As chatbots enhanced by large language models (LLMs) grow increasingly expressive and socially responsive, many users are beginning to form companionship-like bonds with them, particularly with simulated AI partners designed to mimic emotionally attuned interlocutors. These emerging AI companions raise critical questions: Can such systems fulfill social needs typically met by human relationships? How do they shape psychological well-being? And what new risks arise as users develop emotional ties to non-human agents? This study investigates how people interact with AI companions, especially simulated partners on CharacterAI, and how this use is associated with users' psychological well-being. We analyzed survey data from 1,131 users and 4,363 chat sessions (413,509 messages) donated by 244 participants, focusing on three dimensions of use: the nature of the interaction, interaction intensity, and self-disclosure. By triangulating self-reported primary motivations, open-ended relationship descriptions, and annotated chat transcripts, we identify patterns in how users engage with AI companions and how that engagement is associated with well-being. Findings suggest that people with smaller social networks are more likely to turn to chatbots for companionship, but that companionship-oriented chatbot use is consistently associated with lower well-being, particularly when people use chatbots more intensively, engage in higher levels of self-disclosure, and lack strong human social support. Even though some people turn to chatbots to fulfill social needs, these uses do not fully substitute for human connection. As a result, the psychological benefits may be limited, and such relationships could pose risks for more socially isolated or emotionally vulnerable users.