The opacity of AI models necessitates both validation and evaluation before their integration into services. To investigate these models, explainable AI (XAI) employs methods that elucidate the relationship between input features and output predictions. XAI extends beyond the execution of a single algorithm: it involves a series of activities that include preprocessing data, adjusting the XAI method to align with model parameters, invoking the model to generate predictions, and summarizing the XAI results. Adversarial attacks are well-known threats that aim to mislead AI models. Assessment complexity, especially for XAI, increases further when open-source AI models are subject to adversarial attacks, because of the many possible combinations of models, attacks, and explanation methods. To automate the numerous entities and tasks involved in XAI-based assessments, we propose a cloud-based service framework that encapsulates computing components as microservices and organizes assessment tasks into pipelines. Because current XAI tools are not inherently service-oriented, the framework also integrates open XAI tool libraries as part of the pipeline composition. We demonstrate the application of XAI services in assessing five quality attributes of AI models: (1) computational cost, (2) performance, (3) robustness, (4) explanation deviation, and (5) explanation resilience, across computer vision and tabular cases. The service framework generates aggregated analyses that showcase these quality attributes for more than a hundred combination scenarios.
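The pipeline idea sketched in the abstract — preprocessing data, invoking the model, computing explanations, and summarizing results as chained stages — can be illustrated with a minimal sketch. All names and the toy attribution logic here are illustrative assumptions, not the framework's actual API or any real XAI library:

```python
# Minimal sketch of composing XAI assessment tasks as pipeline stages.
# Each stage is a callable that transforms a shared context dict; the
# Pipeline chains them in order, mirroring the activities described above.
# Stage names and logic are hypothetical placeholders, not the paper's API.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Stage = Callable[[Dict[str, Any]], Dict[str, Any]]

@dataclass
class Pipeline:
    stages: List[Stage] = field(default_factory=list)

    def add(self, stage: Stage) -> "Pipeline":
        self.stages.append(stage)
        return self

    def run(self, context: Dict[str, Any]) -> Dict[str, Any]:
        for stage in self.stages:
            context = stage(context)
        return context

# Illustrative stages for one assessment scenario.
def preprocess(ctx):
    # Toy max-scaling stands in for real data preprocessing.
    ctx["features"] = [x / max(ctx["raw"]) for x in ctx["raw"]]
    return ctx

def predict(ctx):
    # Placeholder for invoking the AI model under assessment.
    ctx["prediction"] = sum(ctx["features"])
    return ctx

def explain(ctx):
    # Placeholder attribution: each feature's share of the prediction,
    # standing in for a real XAI method such as a feature-attribution tool.
    ctx["attributions"] = [f / ctx["prediction"] for f in ctx["features"]]
    return ctx

def summarize(ctx):
    # Summarize the XAI result into a small report.
    top = max(range(len(ctx["attributions"])),
              key=ctx["attributions"].__getitem__)
    ctx["report"] = {"prediction": ctx["prediction"], "top_feature": top}
    return ctx

result = (Pipeline()
          .add(preprocess).add(predict).add(explain).add(summarize)
          .run({"raw": [2.0, 4.0, 8.0]}))
```

In a service setting, each stage would be backed by a microservice rather than a local function, but the composition pattern is the same: the pipeline definition stays declarative while the stages encapsulate the model, the XAI library calls, and the summarization.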