Understanding deep neural network (DNN) behavior requires more than evaluating classification accuracy alone; analyzing errors and their predictability is equally crucial. Current evaluation methodologies lack transparency, particularly in explaining the underlying causes of network misclassifications. To address this, we introduce a novel framework that investigates the relationship between the semantic hierarchy depth perceived by a network and its misclassification patterns on real data. Central to our framework is the Similarity Depth (SD) metric, which quantifies the semantic hierarchy depth perceived by a network, together with a method for evaluating how closely the network's errors align with its internally perceived similarity structure. We also propose a graph-based visualization of a model's semantic relationships and misperceptions. A key advantage of our approach is that it relies on class templates (representations derived from classifier-layer weights), so it can be applied to already trained networks without requiring additional data or experiments. Our approach reveals that deep vision networks encode specific semantic hierarchies and that greater semantic depth improves the agreement between perceived class similarities and actual errors.
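The class-template idea can be sketched as follows. The abstract states only that templates are representations derived from classifier-layer weights; a natural minimal reading is to treat each row of the final linear layer's weight matrix as a template for one class and compare templates by cosine similarity. The function names below are hypothetical, and this is a sketch of that reading, not the paper's implementation.

```python
import numpy as np

def class_templates(classifier_weights):
    """Treat each row of the final linear layer's weight matrix as a
    class template (hypothetical helper, not the paper's code).
    L2-normalizing rows reduces later dot products to cosine similarity."""
    norms = np.linalg.norm(classifier_weights, axis=1, keepdims=True)
    return classifier_weights / norms

def class_similarity_matrix(classifier_weights):
    """Pairwise cosine similarity between class templates; an off-diagonal
    entry approximates how similar the network perceives two classes to be."""
    t = class_templates(classifier_weights)
    return t @ t.T

# Toy example: a classifier head for 4 classes over 8-dimensional features.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # stand-in for trained classifier weights
S = class_similarity_matrix(W)
```

The resulting matrix `S` is symmetric with a unit diagonal; thresholding or ranking its off-diagonal entries is one way such a perceived similarity structure could feed a graph-based visualization or be compared against an observed confusion matrix.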