Large language models (LLMs) are already being piloted for clinical use in hospital systems like NYU Langone, Dana-Farber, and the NHS. A proposed deployment use case is psychotherapy, where an LLM-powered chatbot could treat a patient experiencing a mental health crisis. Deploying LLMs for mental health response could hypothetically broaden access to psychotherapy and provide new possibilities for personalizing care. However, recent high-profile failures, like the damaging dieting advice offered by the Tessa chatbot to patients with eating disorders, have raised doubts about LLMs' reliability in high-stakes and safety-critical settings. In this work, we develop an evaluation framework for determining whether LLM-generated responses are a viable and ethical path forward for the automation of mental health treatment. Using human evaluation with trained clinicians and automatic quality-of-care metrics grounded in psychology research, we compare the responses provided by peer-to-peer responders to those provided by a state-of-the-art LLM. We show that LLMs like GPT-4 use implicit and explicit cues to infer patient demographics like race. We then show that there are statistically significant discrepancies between patient subgroups: responses to Black posters are consistently lower in empathy than those to any other demographic group (2%-13% lower than the control group). Promisingly, we find that how responses are generated significantly affects response quality. We conclude by proposing safety guidelines for the potential deployment of LLMs for mental health response.
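To make the subgroup-disparity claim concrete, the sketch below shows one standard way such a gap could be tested: a two-sided permutation test on the difference in mean empathy scores between responses to one subgroup and a control group. This is a hypothetical illustration, not the paper's actual pipeline; the empathy scores would come from an automatic quality-of-care metric (not implemented here), and all variable names and data are placeholders.

```python
# Hypothetical sketch: permutation test for a subgroup empathy gap.
# Empathy scores per response are assumed to be precomputed by some
# automatic metric; the arrays below are placeholder data.
import numpy as np


def permutation_test(group, control, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in group means.

    Returns (observed mean difference, p-value).
    """
    rng = np.random.default_rng(seed)
    observed = group.mean() - control.mean()
    pooled = np.concatenate([group, control])
    n = len(group)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign scores to the two groups
        diff = pooled[:n].mean() - pooled[n:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm


# Placeholder per-response empathy scores in [0, 1] for two subgroups.
black_scores = np.array([0.61, 0.55, 0.58, 0.60, 0.52, 0.57])
control_scores = np.array([0.66, 0.63, 0.68, 0.62, 0.65, 0.64])

diff, p = permutation_test(black_scores, control_scores)
print(f"mean empathy difference: {diff:+.3f}, p = {p:.4f}")
```

A permutation test is used here rather than a t-test because empathy scores from automatic classifiers are often bounded and non-normal; with real data, one test per demographic subgroup (with multiple-comparison correction) would be the natural extension.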