People experiencing mental health crises frequently turn to open-ended generative AI (GenAI) chatbots for support. However, rather than providing immediate assistance, some GenAI chatbots are designed to respond to crisis situations in ways that minimize their developers' liability, primarily through avoidance (e.g., refusing to engage beyond templated referrals to crisis hotlines). Withholding crisis support in these cases may harm users who have no viable alternatives and reduce their motivation to seek further help. At scale, this avoidant design could undermine population mental health. We propose empowerment-oriented design principles for AI crisis support, informed by community helper models. As an initial touchpoint in help-seeking, AI chatbots can act as a supportive bridge to de-escalate crises and connect users to more reliable care. Coordination between AI developers and regulators can enable a better balance of risk mitigation and user empowerment in AI crisis support.