Reasoning about time is essential for Large Language Models (LLMs) to understand the world. Previous works focus on solving specific tasks, primarily time-sensitive question answering. While these methods have proven effective, they cannot generalize to a wider spectrum of temporal reasoning tasks. Therefore, we pose a crucial question: Can we build a universal framework to handle a variety of temporal reasoning tasks? To that end, we systematically study 38 temporal reasoning tasks. Based on the observation that 19 of these tasks are directly related to mathematics, we first leverage an available mathematical dataset to set a solid foundation for temporal reasoning. However, an in-depth study indicates that focusing solely on mathematical enhancement falls short of addressing pure temporal reasoning tasks. To mitigate this limitation, we propose a simple but effective self-critic temporal optimization method to enhance the model's temporal reasoning capabilities without sacrificing general task abilities. Finally, we develop Timo, a model designed to excel in temporal reasoning at the 7B and 13B scales. Notably, Timo outperforms counterpart LLMs by 10.0 and 7.6 points in average accuracy, respectively, achieving new state-of-the-art (SOTA) performance among models of comparable size. Extensive experiments further validate our framework's effectiveness and its generalization across diverse temporal tasks. The code is available at https://github.com/zhaochen0110/Timo.