The automatic detection of temporal relations among events has mainly been investigated with encoder-only models such as RoBERTa. Large Language Models (LLMs) have recently shown promising performance in temporal reasoning tasks such as temporal question answering. Nevertheless, recent studies have tested LLMs on temporal relation detection using closed-source models only, limiting the interpretability of those results. In this work, we investigate LLMs' performance and decision process in the Temporal Relation Classification task. First, we assess the performance of seven open- and closed-source LLMs, experimenting with in-context learning and lightweight fine-tuning approaches. Results show that LLMs with in-context learning significantly underperform smaller encoder-only models based on RoBERTa. Then, we delve into the possible reasons for this gap by applying explainability methods. The outcome suggests that the LLMs' autoregressive nature, which causes them to focus only on the last part of the sequence, limits their performance on this task. Additionally, we evaluate the word embeddings of these two families of models to better understand their pre-training differences. The code and the fine-tuned models are available on GitHub.