People experiencing mental health crises frequently turn to open-ended generative AI (GenAI) chatbots such as ChatGPT for support. However, rather than providing immediate assistance, most GenAI chatbots are designed to respond to crisis situations in ways that minimize their developers' liability, primarily through avoidance (e.g., refusing to engage beyond templated referrals to crisis hotlines). Withholding crisis support in these cases may harm users who have no viable alternatives and reduce their motivation to seek further help. At scale, this avoidant design could undermine population mental health. We propose empowerment-oriented design principles for AI crisis support, informed by community helper models. As an initial touchpoint in help-seeking, AI chatbots can act as a supportive bridge to de-escalate crises and connect users to more reliable care. Coordination between AI developers and regulators can enable a better balance of risk mitigation and user empowerment in AI crisis support.