Existing multimodal large language models underperform on time series reasoning largely because they lack rationale priors that connect temporal observations to their downstream outcomes, which leads them to rely on superficial pattern matching rather than principled reasoning. We therefore propose rationale-grounded in-context learning for time series reasoning, in which rationales serve as guiding reasoning units rather than post-hoc explanations, and develop the RationaleTS method. Specifically, we first induce label-conditioned rationales composed of reasoning paths from observable evidence to potential outcomes. We then design a hybrid retrieval mechanism that balances temporal patterns and semantic contexts to retrieve correlated rationale priors for the final in-context inference on new samples. Extensive experiments on time series reasoning tasks across three domains demonstrate the effectiveness and efficiency of the proposed RationaleTS. We will release our code for reproduction.
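To make the hybrid retrieval step concrete, the following is a minimal sketch of how temporal-pattern similarity and semantic similarity might be balanced when selecting rationale priors for a new sample. The function names (`temporal_sim`, `semantic_sim`, `retrieve_rationales`), the cosine-similarity choices, and the mixing weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: rank stored rationales by a weighted mix of
# temporal-pattern similarity and semantic (text-embedding) similarity.
import numpy as np

def znorm(x: np.ndarray) -> np.ndarray:
    """Z-normalize a 1-D series so temporal similarity ignores scale and offset."""
    std = x.std()
    return (x - x.mean()) / std if std > 0 else x - x.mean()

def temporal_sim(q: np.ndarray, c: np.ndarray) -> float:
    """Cosine similarity between z-normalized series of equal length."""
    qz, cz = znorm(q), znorm(c)
    denom = np.linalg.norm(qz) * np.linalg.norm(cz)
    return float(qz @ cz / denom) if denom > 0 else 0.0

def semantic_sim(q_emb: np.ndarray, c_emb: np.ndarray) -> float:
    """Cosine similarity between text embeddings of query context and rationale."""
    denom = np.linalg.norm(q_emb) * np.linalg.norm(c_emb)
    return float(q_emb @ c_emb / denom) if denom > 0 else 0.0

def retrieve_rationales(query_series, query_emb, pool, alpha=0.5, k=4):
    """Return the top-k rationales from `pool`, a list of dicts with keys
    'series', 'embedding', and 'rationale'. `alpha` balances temporal
    patterns (alpha) against semantic context (1 - alpha)."""
    scored = []
    for item in pool:
        score = (alpha * temporal_sim(query_series, item["series"])
                 + (1 - alpha) * semantic_sim(query_emb, item["embedding"]))
        scored.append((score, item["rationale"]))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [r for _, r in scored[:k]]
```

The retrieved rationales would then be placed in the prompt as in-context reasoning units for the new sample; how the weight between the two similarity signals is set in RationaleTS is not specified here and is left as an assumption.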