Ensuring transparency and trust in artificial intelligence (AI) models is essential, particularly as they are increasingly applied in safety-critical and high-stakes domains. Explainable AI (XAI) has emerged as a promising approach to address this challenge, yet the rigorous evaluation of XAI methods remains crucial for optimizing the trade-offs between model complexity, predictive performance, and interpretability. While extensive progress has been achieved in evaluating XAI techniques for classification tasks, evaluation strategies tailored to semantic segmentation remain relatively underexplored. This work introduces a comprehensive and systematic evaluation framework specifically designed for assessing XAI in semantic segmentation, explicitly accounting for both spatial and contextual task complexities. The framework employs pixel-level evaluation strategies and carefully designed metrics to provide fine-grained interpretability insights. Simulation results using recently adapted class activation mapping (CAM)-based XAI schemes demonstrate the efficiency, robustness, and reliability of the proposed methodology. These findings contribute to advancing transparent, trustworthy, and accountable semantic segmentation models.
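To make the kind of pixel-level, CAM-based evaluation described above concrete, the following is a minimal illustrative sketch, not the paper's actual framework: a Grad-CAM-style explanation adapted to a toy semantic-segmentation network, followed by a simple pixel-level deletion probe of explanation faithfulness. All names here (TinySegNet, seg_grad_cam, pixel_deletion_score) are hypothetical placeholders introduced only for this example.

```python
# Illustrative sketch (assumed, not the paper's pipeline): CAM-style explanation
# for a toy segmentation model plus a pixel-level deletion check.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy fully convolutional model: image -> per-pixel class logits."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        feats = self.features(x)           # spatial feature maps used for the CAM
        return self.classifier(feats), feats

def seg_grad_cam(model, image, target_class):
    """Grad-CAM adapted to segmentation: aggregate the target-class logits over
    the whole spatial extent, back-propagate to the feature maps, and weight
    them by the spatially averaged gradients."""
    model.eval()
    image = image.requires_grad_(True)
    logits, feats = model(image)
    feats.retain_grad()
    score = logits[:, target_class].sum()                 # sum over all pixels
    score.backward()
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)   # global average of gradients
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)

def pixel_deletion_score(model, image, cam, target_class, fraction=0.2):
    """Pixel-level faithfulness probe: zero out the top-`fraction` most relevant
    pixels (per the CAM) and measure the drop in the mean target-class logit."""
    with torch.no_grad():
        base = model(image)[0][:, target_class].mean()
        k = int(fraction * cam.numel())
        thresh = cam.flatten().topk(k).values.min()
        masked = image * (cam < thresh).float()           # remove most-relevant pixels
        perturbed = model(masked)[0][:, target_class].mean()
    return (base - perturbed).item()

if __name__ == "__main__":
    model = TinySegNet(num_classes=3)
    image = torch.rand(1, 3, 64, 64)
    cam = seg_grad_cam(model, image, target_class=1)
    print("deletion score:", pixel_deletion_score(model, image, cam.detach(), target_class=1))
```

A larger deletion score indicates that the pixels the explanation marks as relevant do in fact drive the model's per-pixel predictions; averaging such scores over images and classes is one way a pixel-level evaluation metric of the kind described above could be instantiated.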