People are increasingly turning to generative AI (e.g., ChatGPT, Gemini, Copilot) for emotional support and companionship. While trust is likely to play a central role in enabling these informal and unsupervised interactions, we still lack an understanding of how people develop and experience it in this context. Seeking to fill this gap, we recruited 24 frequent users of generative AI for emotional support and conducted a qualitative study consisting of diary entries about interactions, transcripts of chats with AI, and in-depth interviews. Our results suggest important novel drivers of trust in this context: familiarity emerging from personalisation, nuanced mental models of generative AI, and awareness of people's control over conversations. Notably, generative AI's homogeneous use of personalised, positive, and persuasive language appears to promote some of these trust-building factors. However, this also seems to discourage other trust-related behaviours, such as remembering that generative AI is a machine trained to converse in human language. We present implications for future research that are likely to become critical as the use of generative AI for emotional support increasingly overlaps with therapeutic work.