Large language models (LLMs) are increasingly used for mental health support, yet they can produce responses that are overly directive, inconsistent, or clinically misaligned, particularly in sensitive or high-risk contexts. Existing approaches to mitigating these risks rely largely on implicit alignment through training or prompting, offering limited transparency and runtime accountability. We introduce PAIR-SAFE, a paired-agent framework for auditing and refining AI-generated mental health support that integrates a Responder agent with a supervisory Judge agent grounded in the clinically validated Motivational Interviewing Treatment Integrity (MITI-4) framework. The Judge audits each response and provides structured ALLOW or REVISE decisions that guide runtime response refinement. We simulate counseling interactions using a support-seeker simulator derived from human-annotated motivational interviewing data. We find that Judge-supervised interactions show significant improvements on key MITI dimensions, including Partnership, Seek Collaboration, and overall Relational quality. Our quantitative findings are supported by qualitative expert evaluation, which further highlights the nuances of runtime supervision. Together, our results show that such a paired-agent approach can provide clinically grounded auditing and refinement for AI-assisted conversational mental health support.