Deep learning has significantly improved time series classification, yet the lack of explainability in these models remains a major challenge. While Explainable AI (XAI) techniques aim to make model decisions more transparent, their effectiveness is often hindered by the high dimensionality and noise of raw time series data. In this work, we investigate whether transforming time series into discrete latent representations, using methods such as Vector Quantized Variational Autoencoders (VQ-VAE) and Discrete Variational Autoencoders (DVAE), not only preserves but also enhances explainability by reducing redundancy and focusing on the most informative patterns. We show that applying XAI methods to these compressed representations yields concise, structured explanations that maintain faithfulness without sacrificing classification performance. Additionally, we propose Similar Subsequence Accuracy (SSA), a novel metric that quantitatively assesses the alignment between XAI-identified salient subsequences and the label distribution in the training data. SSA provides a systematic way to validate whether the features highlighted by XAI methods are truly representative of the learned classification patterns. Our findings demonstrate that discrete latent representations not only retain the essential characteristics needed for classification but also offer a pathway to more compact, interpretable, and computationally efficient explanations in time series analysis.
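Since SSA is only described here at a high level, the following is a minimal Python sketch of how such a metric might be computed once an XAI method has extracted a salient subsequence. The function name `ssa_score`, the use of Euclidean distance for subsequence similarity, and the nearest-neighbor voting scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ssa_score(salient_subseq, predicted_label, train_series, train_labels, k=10):
    """Similar Subsequence Accuracy (illustrative sketch, not the paper's code).

    Slides the XAI-identified salient subsequence over every training series,
    keeps each series' best-matching window (Euclidean distance), and returns
    the fraction of the k closest matches whose series carries the label
    predicted for the explained instance.
    """
    m = len(salient_subseq)
    matches = []  # (best window distance, series label) per training series
    for series, label in zip(train_series, train_labels):
        if len(series) < m:
            continue
        # Distance of the salient pattern to every window of this series.
        dists = [
            np.linalg.norm(series[i:i + m] - salient_subseq)
            for i in range(len(series) - m + 1)
        ]
        matches.append((min(dists), label))
    # Keep the k training series with the closest matching windows.
    matches.sort(key=lambda t: t[0])
    top = matches[:k]
    if not top:
        return 0.0
    # SSA: how often the nearest matches agree with the predicted label.
    return sum(label == predicted_label for _, label in top) / len(top)
```

Under this reading, an SSA close to 1 would indicate that the subsequence highlighted by the XAI method occurs predominantly in training series of the predicted class, i.e. that the explanation reflects a pattern the classifier could plausibly have learned.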