The integration of Artificial Intelligence (AI) into clinical settings presents a software engineering challenge, demanding a shift from isolated models to robust, governable, and reliable systems. However, industrial applications are often plagued by brittle, prototype-derived architectures and a lack of systemic oversight, creating a ``responsibility vacuum'' in which safety and accountability are compromised. This paper presents an industry case study of the ``Maria'' platform, a production-grade AI system in primary healthcare that addresses this gap. Our central hypothesis is that trustworthy clinical AI is achieved through the holistic integration of four foundational engineering pillars. We present a synergistic architecture that combines Clean Architecture for maintainability with an event-driven architecture for resilience and auditability. We introduce the Agent as the primary unit of modularity, each possessing its own autonomous MLOps lifecycle. Finally, we show how a Human-in-the-Loop governance model is technically integrated not merely as a safety check, but as a critical, event-driven data source for continuous improvement. We present the platform as a reference architecture, offering practical lessons for engineers building maintainable, scalable, and accountable AI-enabled systems in high-stakes domains.