The explainable AI (XAI) research community has proposed numerous technical methods, yet deploying explainability as systems remains challenging: interactive explanation systems require both suitable algorithms and system capabilities that maintain explanation usability across repeated queries, evolving models and data, and governance constraints. We argue that operationalizing XAI requires treating explainability as an information systems problem in which user interaction demands induce specific system requirements. We introduce X-SYS, a reference architecture for interactive explanation systems that guides (X)AI researchers, developers, and practitioners in connecting interactive explanation user interfaces (XUIs) with system capabilities. X-SYS is organized around four quality attributes, STAR (scalability, traceability, responsiveness, and adaptability), and specifies a five-component decomposition (XUI Services, Explanation Services, Model Services, Data Services, and Orchestration and Governance). It maps interaction patterns to system capabilities, decoupling user interface evolution from backend computation. We implement X-SYS in SemanticLens, a system for semantic search and activation steering in vision-language models. SemanticLens demonstrates how contract-based service boundaries enable independent evolution, how offline/online separation ensures responsiveness, and how persistent state management supports traceability. Together, this work provides a reusable blueprint and a concrete instantiation for interactive explanation systems, supporting end-to-end design under operational constraints.