Deep learning models are widely used in critical applications, highlighting the need for pre-deployment model understanding and improvement. Visual concept-based methods, while increasingly used for this purpose, face challenges: (1) most concepts lack interpretability, (2) existing methods require model knowledge that is often unavailable at run time, and (3) there is no no-code method for improving a model after it has been understood. To address these challenges, we present InterVLS. The system facilitates model understanding by discovering text-aligned concepts and measuring their influence with model-agnostic linear surrogates. Employing visual analytics, InterVLS offers concept-based explanations and performance insights. It enables users to adjust concept influences to update a model, facilitating no-code model improvement. We evaluate InterVLS in a user study and illustrate its functionality with two scenarios. Results indicate that InterVLS effectively helps users identify concepts influential to a model, gain insights, and adjust concept influences to improve the model. We conclude with a discussion based on our study results.