Large Language Models (LLMs) are increasingly used for mental health support, yet little is known about how people with mental health challenges engage with them, how they evaluate their usefulness, and what design opportunities they envision. We conducted 20 semi-structured interviews with people in the UK who live with mental health conditions and have used LLMs for mental health support. Through reflexive thematic analysis, we found that participants engaged with LLMs in conditional and situational ways: for immediacy, non-judgement, self-paced disclosure, cognitive reframing, and relational engagement. At the same time, participants articulated clear boundaries informed by prior therapeutic experience: LLMs were effective for mild-to-moderate distress but inadequate for crises, trauma, and complex social-emotional situations. We contribute empirical insights into the lived use of LLMs for mental health, highlight boundary-setting as central to their safe role, and propose design and governance directions for embedding them responsibly within care ecosystems.