LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support for this demographic. We interviewed 18 LGBTQ+ and 13 non-LGBTQ+ participants about their experiences with LLM-based chatbots for mental health needs. LGBTQ+ participants relied on these chatbots for mental health support, likely due to an absence of support in real life. Notably, while LLMs offer prompt support, they frequently fall short in grasping the nuances of LGBTQ+-specific challenges. Although fine-tuning LLMs to address LGBTQ+ needs can be a step in the right direction, it is not a panacea: the deeper issue is entrenched societal discrimination. Consequently, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community.