To improve the trustworthiness of an AI model, it is essential to find consistent, understandable representations of its inference process. This understanding is particularly important in high-stakes operations such as weather forecasting, where identifying the underlying meteorological mechanisms is as critical as the accuracy of the predictions. Despite a growing body of explainable-AI literature addressing this issue, the applicability of its solutions is often limited by their AI-centric development. To fill this gap, we follow a user-centric process to develop an example-based concept analysis framework, which identifies cases that follow an inference process similar to that of the target instance in a target model and presents them in a user-comprehensible format. Our framework provides users with visually and conceptually analogous examples, along with concept-assignment probabilities that resolve ambiguities in weather mechanisms. To bridge the gap between the vector representations identified in models and human-understandable explanations, we compile a human-annotated concept dataset and implement a user interface to assist the domain experts involved in the framework's development.
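A minimal sketch of the retrieval idea described above, assuming PyTorch: given a target instance's latent representation in the model, rank reference cases by similarity and report soft concept assignments rather than hard labels. The function names, the use of cosine similarity, and the prototype-based concept scoring are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: example retrieval by latent similarity, plus soft
# concept assignment. Names and the choice of cosine similarity are
# assumptions for illustration, not the framework's actual method.
import torch
import torch.nn.functional as F

def retrieve_analogous_examples(target_vec: torch.Tensor,
                                reference_vecs: torch.Tensor,
                                k: int = 5):
    """Rank reference cases by cosine similarity between latent vectors.

    target_vec:     (d,)   latent representation of the target instance
    reference_vecs: (N, d) latent representations of candidate examples
    """
    sims = F.cosine_similarity(target_vec.unsqueeze(0), reference_vecs, dim=1)
    top = torch.topk(sims, k)
    return top.indices, top.values  # analogous-case indices + similarities

def concept_probabilities(vec: torch.Tensor,
                          concept_prototypes: torch.Tensor) -> torch.Tensor:
    """Soft concept assignment: one probability per annotated concept,
    via a softmax over similarities to concept prototype vectors
    (e.g., mean latent vectors of human-annotated concept examples)."""
    sims = F.cosine_similarity(vec.unsqueeze(0), concept_prototypes, dim=1)
    return torch.softmax(sims, dim=0)
```

Reporting the full probability vector, rather than the single most similar concept, is what lets a forecaster see when an instance sits ambiguously between two weather mechanisms.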