Capsule Networks (CapsNets) have been re-introduced as a more compact and interpretable alternative to standard deep neural networks. While recent efforts have demonstrated their compression capabilities, their interpretability properties have not, to date, been fully assessed. Here, we conduct a systematic and principled study to assess the interpretability of these networks. In particular, we analyze the degree to which part-whole relationships are actually encoded within the learned representation. Our analysis on the MNIST, SVHN, PASCAL-Part, and CelebA datasets suggests that the representations encoded in CapsNets may be neither as disentangled nor as strictly related to part-whole relationships as is commonly claimed in the literature.