Deep time series models continue to improve predictive performance, yet their deployment remains limited by their black-box nature. Existing interpretability approaches in the field focus on explaining the internal computations of a model, without asking whether those explanations align with how a human would reason about the studied phenomenon. We argue instead that interpretability in deep time series models should pursue semantic alignment: predictions should be expressed in terms of variables that are meaningful to the end user, mediated by spatial and temporal mechanisms that admit user-dependent constraints. In this paper, we formalize this requirement and further demand that, once established, semantic alignment be preserved under temporal evolution, a constraint with no analog in static settings. Building on this definition, we outline a blueprint for semantically aligned deep time series models, identify properties that support trust, and discuss implications for model design.