With the increasing prevalence of mental health conditions worldwide, AI-powered chatbots and conversational agents have emerged as accessible tools to support mental health. However, deploying Large Language Models (LLMs) in mental healthcare applications raises significant privacy concerns, particularly under regulations such as HIPAA and GDPR. In this work, we propose FedMentalCare, a privacy-preserving framework that combines Federated Learning (FL) with Low-Rank Adaptation (LoRA) to fine-tune LLMs for mental health analysis. We investigate how varying client data volumes and model architectures (e.g., MobileBERT and MiniLM) affect performance in FL environments. Our framework demonstrates a scalable, privacy-aware approach to deploying LLMs in real-world mental healthcare scenarios, addressing both data security and computational efficiency challenges.
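The core FL-plus-LoRA idea can be sketched as follows: each client fine-tunes only small low-rank adapter matrices locally and shares those adapters (never raw patient data) with a server that averages them, FedAvg-style. The sketch below is a minimal, dependency-free illustration under assumed names and shapes; it is not the paper's actual implementation.

```python
# Minimal sketch of federated LoRA aggregation: clients send only their
# low-rank adapter matrices (A, B); the server computes a data-volume-
# weighted average (FedAvg). All names, shapes, and values are
# illustrative assumptions.

def lora_delta(A, B, alpha, r):
    """Low-rank weight update: delta_W = (alpha / r) * B @ A."""
    scale = alpha / r
    rows, inner, cols = len(B), len(A), len(A[0])
    return [[scale * sum(B[i][k] * A[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def fedavg(client_mats, weights):
    """Weighted average of per-client matrices, weighted by each
    client's local data volume (raw data never leaves the client)."""
    total = sum(weights)
    rows, cols = len(client_mats[0]), len(client_mats[0][0])
    return [[sum(w * m[i][j] for m, w in zip(client_mats, weights)) / total
             for j in range(cols)] for i in range(rows)]

# Two clients with rank-1 adapters for a 2x2 weight matrix, alpha = 2.
r, alpha = 1, 2
client_A = [[[1.0, 0.0]], [[0.0, 1.0]]]      # each A: r x d_in
client_B = [[[1.0], [0.0]], [[0.0], [1.0]]]  # each B: d_out x r
sizes = [100, 300]                            # per-client data volumes

A_global = fedavg(client_A, sizes)            # [[0.25, 0.75]]
B_global = fedavg(client_B, sizes)            # [[0.25], [0.75]]
delta = lora_delta(A_global, B_global, alpha, r)
```

In a real deployment, `A` and `B` would be the trainable LoRA adapters attached to a frozen base model such as MobileBERT or MiniLM; because the adapters are orders of magnitude smaller than the full weights, communication cost per FL round stays low.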