Machine learning (ML) is becoming increasingly popular in meteorological decision-making. Although the literature on explainable artificial intelligence (XAI) is growing steadily, user-centered XAI studies have not yet extended to this domain. Through user studies, this work defines three requirements for explanations of black-box models in meteorology: statistical model performance for different rainfall scenarios to identify model bias, model reasoning, and the confidence of model outputs. Appropriate XAI methods are mapped to each requirement, and the generated explanations are tested quantitatively and qualitatively. An XAI interface system is designed based on user feedback. The results indicate that the explanations increase decision utility and user trust. Users prefer intuitive explanations over those based on XAI algorithms, even for potentially easy-to-recognize examples. These findings provide evidence for future research on user-centered XAI algorithms, as well as a basis for improving the usability of AI systems in practice.