Large Language Models (LLMs) are increasingly used for mental health support, yet little is known about how people with mental health challenges engage with them, how they evaluate their usefulness, and what design opportunities they envision. We conducted 20 semi-structured interviews with people in the UK who live with mental health conditions and have used LLMs for mental health support. Through reflexive thematic analysis, we found that participants engaged with LLMs in conditional and situational ways: seeking immediacy, non-judgement, self-paced disclosure, cognitive reframing, and relational engagement. Simultaneously, participants articulated clear boundaries informed by prior therapeutic experience: LLMs were effective for mild-to-moderate distress but inadequate for crises, trauma, and complex social-emotional situations. We contribute empirical insights into the lived use of LLMs for mental health, highlight boundary-setting as central to their safe role, and propose design and governance directions for embedding them responsibly within care ecosystems.