This article presents the design of an open-API-based explainable AI (XAI) service that provides feature contribution explanations for cloud AI services. Cloud AI services are widely used to develop domain-specific applications with precise learning metrics, yet they remain opaque about how their models produce predictions. We argue that XAI operations should be accessible as open APIs so that they can be consolidated into the assessment of cloud AI services. We propose a microservice architecture that offers feature contribution explanations for cloud AI services without unfolding the network structure of the cloud models. The same architecture can also evaluate model performance and XAI consistency metrics that indicate the trustworthiness of cloud AI services. We collect provenance data from the operational pipelines to enable reproducibility within the XAI service. Furthermore, we present discovery scenarios for experimental tests of model performance and XAI consistency metrics on the leading cloud vision AI services. The results confirm that the architecture, based on open APIs, is cloud-agnostic, and that data augmentation yields measurable improvements in the XAI consistency metrics of cloud AI services.
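To make the black-box setting concrete, the sketch below shows one way a feature contribution explanation can be computed against an opaque prediction endpoint, without access to the model's network structure. This is an illustrative occlusion-based approximation under our own assumptions, not the paper's implementation: `occlusion_contributions` and `fake_cloud_predict` are hypothetical names, and `fake_cloud_predict` stands in for a wrapper around a real cloud vision API call.

```python
import numpy as np

def occlusion_contributions(predict, image, target_class, patch=4, baseline=0.0):
    """Black-box feature contribution map: occlude each patch of the input
    and record the drop in the target-class score. `predict` can be any
    callable returning class probabilities, e.g. a cloud vision API wrapper."""
    base_score = predict(image)[target_class]
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Positive values mark regions that support the prediction.
            heat[i // patch, j // patch] = base_score - predict(occluded)[target_class]
    return heat

# Hypothetical stand-in for a remote classifier: it scores class 0 by the
# mean intensity of the top-left quadrant of a 16x16 image.
def fake_cloud_predict(img):
    score = img[:8, :8].mean()
    return np.array([score, 1.0 - score])

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # bright top-left quadrant drives class 0
heat = occlusion_contributions(fake_cloud_predict, img, target_class=0)
```

Because the explanation only needs repeated calls to `predict`, the same routine applies unchanged to any cloud provider's endpoint, which is the cloud-agnostic property the open-API design relies on.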