Online, people often recount their experiences turning to conversational AI agents (e.g., ChatGPT, Claude, Copilot) for mental health support, sometimes going so far as to replace their therapists. These anecdotes suggest that AI agents have great potential to offer accessible mental health support. However, it is unclear how to realize this potential in extreme use cases such as mental health crises. In this work, we explore the first-person experience of turning to a conversational AI agent during a mental health crisis. From a testimonial survey (n = 53) of lived experiences, we find that people use AI agents to fill the in-between spaces of human support; they turn to AI due to a lack of access to mental health professionals or fears of burdening others. At the same time, our interviews with mental health experts (n = 16) suggest that human-human connection is an essential positive action when managing a mental health crisis. Using the stages of change model, our results suggest that a responsible AI crisis intervention is one that increases the user's preparedness to take a positive action while de-escalating any intended negative action. We discuss the implications of designing conversational AI agents as bridges towards human-human connection rather than as ends in themselves.