Large language models (LLMs) have showcased remarkable reasoning capabilities, yet they remain susceptible to errors, particularly in temporal reasoning tasks involving complex temporal logic. Existing research has explored LLM performance on temporal reasoning using diverse datasets and benchmarks. However, these studies often rely on real-world data that LLMs may have encountered during pre-training, or employ anonymization techniques that can inadvertently introduce factual inconsistencies. In this work, we address these limitations by introducing novel synthetic datasets specifically designed to assess LLM temporal reasoning abilities in various scenarios. The diversity of question types across these datasets enables systematic investigation into the impact of problem structure, size, question type, fact order, and other factors on LLM performance. Our findings provide valuable insights into the strengths and weaknesses of current LLMs on temporal reasoning tasks. To foster further research in this area, we are open-sourcing the datasets and evaluation framework used in our experiments: https://huggingface.co/datasets/baharef/ToT.