The rapid advancement of Large Language Models (LLMs), reasoning models, and agentic AI approaches coincides with a growing global mental health crisis, in which rising demand has not translated into adequate access to professional support, particularly for underserved populations. This presents a unique opportunity for AI to complement human-led interventions, offering scalable and context-aware support while preserving human connection in this sensitive domain. We explore AI applications in peer support, self-help interventions, proactive monitoring, and data-driven insights, taking a human-centred approach that ensures AI supports rather than replaces human interaction. However, deploying AI in mental health settings presents challenges, including ethical concerns, limited transparency, privacy risks, and the risk of over-reliance. We propose a hybrid ecosystem in which AI assists but does not replace human providers, emphasising responsible deployment and evaluation. We also present early work and findings from several of these AI applications. Finally, we outline future research directions for refining AI-enhanced interventions while adhering to ethical and culturally sensitive guidelines.