The ability to provide trustworthy maternal health information through phone-based chatbots can have a significant impact, particularly in low-resource settings where users have low health literacy and limited access to care. However, deploying such systems is technically challenging: user queries are short, underspecified, and code-mixed across languages; answers require grounding in region-specific context; and partial or missing symptom context makes safe routing decisions difficult. We present a chatbot for maternal health in India developed through a partnership between academic researchers, a health tech company, a public health nonprofit, and a hospital. The system combines (1) stage-aware triage that routes high-risk queries to expert templates, (2) hybrid retrieval over curated maternal/newborn guidelines, and (3) evidence-conditioned generation from an LLM. Our core contribution is an evaluation workflow for high-stakes deployment under limited expert supervision. Targeting both component-level and end-to-end testing, we introduce: (i) a labeled triage benchmark (N=150) achieving 86.7% emergency recall, explicitly reporting the missed-emergency vs. over-escalation trade-off; (ii) a synthetic multi-evidence retrieval benchmark (N=100) with chunk-level evidence labels; (iii) an LLM-as-judge comparison on real queries (N=781) using clinician-codesigned criteria; and (iv) expert validation. Our findings show that trustworthy medical assistants in multilingual, noisy settings require defense-in-depth design paired with multi-method evaluation, rather than any single choice of model or evaluation method.