Large Language Models (LLMs) are becoming increasingly ubiquitous, yet their ability to reason about and retain temporal information remains limited. This hinders their application in real-world scenarios where understanding the sequential nature of events is crucial. In this paper, we experiment with state-of-the-art models on a novel, large-scale temporal dataset, \textbf{TempUN}, and reveal significant limitations in their temporal retention and reasoning abilities. Interestingly, closed-source models indicate knowledge gaps more frequently, potentially suggesting a trade-off between uncertainty awareness and incorrect responses. Furthermore, the various fine-tuning approaches we explored yielded no major performance improvements. The associated dataset and code are available at https://github.com/lingoiitgn/TempUN.