Transparent models, which are machine learning models that produce inherently interpretable predictions, are receiving significant attention in high-stakes domains. However, despite much real-world data being collected as time series, there is a lack of studies on transparent time series models. To address this gap, we propose a novel transparent neural network model for time series called Generalized Additive Time Series Model (GATSM). GATSM consists of two parts: 1) independent feature networks to learn feature representations, and 2) a transparent temporal module to learn temporal patterns across different time steps using the feature representations. This structure allows GATSM to effectively capture temporal patterns and handle dynamic-length time series while preserving transparency. Empirical experiments show that GATSM significantly outperforms existing generalized additive models and achieves comparable performance to black-box time series models, such as recurrent neural networks and Transformer. In addition, we demonstrate that GATSM finds interesting patterns in time series. The source code is available at https://github.com/gim4855744/GATSM.
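To make the two-part structure concrete, below is a minimal, hypothetical sketch of a GATSM-style model: one tiny independent network per feature produces additive contributions at each time step, and a simple learned per-time-step weight vector stands in for the transparent temporal module. All names and the exact aggregation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_net(x, w1, b1, w2):
    # Tiny per-feature MLP: maps scalar feature values to scalar contributions.
    h = np.tanh(x[..., None] * w1 + b1)  # (..., hidden)
    return h @ w2                        # (...,)

class TinyGATSM:
    """Illustrative sketch (not the paper's code): independent feature
    networks plus transparent per-time-step weights, combined additively."""

    def __init__(self, n_features, n_steps, hidden=8):
        self.params = [
            (rng.normal(size=hidden), rng.normal(size=hidden),
             rng.normal(size=hidden))
            for _ in range(n_features)
        ]
        # Stand-in for the temporal module: one transparent weight per step.
        self.time_w = np.ones(n_steps) / n_steps

    def contributions(self, x):
        # x: (n_steps, n_features) -> additive contribution of every
        # (time step, feature) pair, each directly inspectable.
        c = np.stack(
            [feature_net(x[:, j], *self.params[j]) for j in range(x.shape[1])],
            axis=1,
        )                                # (n_steps, n_features)
        return c * self.time_w[:, None]

    def predict(self, x):
        # The prediction is the plain sum of contributions, so the model
        # stays additive and hence transparent.
        return self.contributions(x).sum()

model = TinyGATSM(n_features=3, n_steps=5)
x = rng.normal(size=(5, 3))
contrib = model.contributions(x)
print(contrib.shape)  # (5, 3): one contribution per (step, feature)
```

Because the output is an exact sum of the per-(step, feature) terms, each entry of `contrib` can be read off directly as that input's effect on the prediction, which is the transparency property the abstract describes.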