Deep learning models are complex due to their size, structure, and the inherent randomness of their training procedures. Additional complexity arises from the selection of datasets and inductive biases. To address these challenges to explainability, Kim et al. (2018) introduced Concept Activation Vectors (CAVs), which aim to understand deep models' internal states in terms of human-aligned concepts. These concepts correspond to directions in latent space, identified using linear discriminants. Although this method was first applied to image classification, it was later adapted to other domains, including natural language processing. In this work, we attempt to apply the method to electroencephalogram (EEG) data in order to explain BENDR (Kostas et al., 2021), a large-scale transformer model. A crucial part of this endeavor involves defining the explanatory concepts and selecting relevant datasets to ground these concepts in the latent space. Our focus is on two mechanisms for EEG concept formation: the use of externally labeled EEG datasets, and the application of anatomically defined concepts. The former approach is a straightforward generalization of methods used in image classification, while the latter is novel and specific to EEG. We present evidence that both approaches to concept formation yield valuable insights into the representations learned by deep EEG models.
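For reference, a minimal sketch of the CAV construction described above, assuming activations have already been extracted from a chosen layer of the model. The function names and the use of scikit-learn's logistic regression are illustrative assumptions, not the authors' implementation; Kim et al. (2018) likewise fit a linear classifier separating concept activations from random activations and take the vector normal to its decision boundary as the CAV.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear discriminant separating concept activations from random
    activations; the unit-normalized normal to its decision boundary is the
    Concept Activation Vector (CAV). Both inputs have shape (n, d), where d
    is the dimensionality of the chosen layer's activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)


def tcav_score(class_grads: np.ndarray, cav: np.ndarray) -> float:
    """Fraction of inputs whose class score increases along the CAV:
    the sign of the directional derivative grad . cav, averaged over
    the (n, d) array of per-input gradients at the same layer."""
    return float(np.mean(class_grads @ cav > 0))
```

In this sketch, the concept examples could come from either mechanism discussed above: segments drawn from an externally labeled EEG dataset, or segments selected by an anatomically defined criterion (hypothetical usage; the abstract does not specify the extraction pipeline).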