In this study, we explore Continual Learning for Temporal Sensitive Question Answering (CLTSQA), an emerging research area. Previous research has primarily focused on Temporal Sensitive Question Answering (TSQA), often overlooking the unpredictable nature of future events. In real-world applications, it is crucial for models to continually acquire knowledge over time rather than rely on a static, complete dataset. Our paper investigates strategies that enable models to adapt to the ever-evolving information landscape, thereby addressing the challenges inherent in CLTSQA. To support our research, we first construct a novel dataset, divided into five subsets, designed specifically for the successive stages of continual learning. We then propose a training framework for CLTSQA that integrates temporal memory replay and temporal contrastive learning. Our experimental results highlight two key findings: first, the CLTSQA task poses unique challenges for existing models; second, our proposed framework effectively addresses these challenges, yielding improved performance.
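To make the "temporal memory replay" component concrete, the sketch below shows one common way such a mechanism can be organized: a buffer that retains a bounded sample of question-answer pairs from each earlier time period and mixes them into training on new periods. This is a minimal illustration only; the class name, capacity scheme, and reservoir-style replacement are assumptions for exposition, not the paper's actual implementation.

```python
import random

class TemporalReplayBuffer:
    """Hypothetical sketch of temporal memory replay: keep a bounded,
    per-period sample of past QA examples so that training on a new
    time period can interleave them and reduce forgetting."""

    def __init__(self, capacity_per_period=100):
        self.capacity = capacity_per_period
        # period id -> list of stored (question, answer) examples
        self.periods = {}

    def add(self, period, example):
        bucket = self.periods.setdefault(period, [])
        if len(bucket) < self.capacity:
            bucket.append(example)
        else:
            # Reservoir-style replacement keeps a bounded random sample
            # of the period's stream once the bucket is full.
            bucket[random.randrange(self.capacity)] = example

    def sample(self, k):
        # Draw up to k replay examples pooled across all past periods.
        pool = [ex for bucket in self.periods.values() for ex in bucket]
        return random.sample(pool, min(k, len(pool)))

buffer = TemporalReplayBuffer(capacity_per_period=2)
buffer.add(2020, ("Who was CEO in 2020?", "Alice"))
buffer.add(2021, ("Who was CEO in 2021?", "Bob"))
replayed = buffer.sample(2)
```

In a continual-learning loop, each new training batch for period t would be augmented with `buffer.sample(k)` examples from periods earlier than t, which is the standard way replay counteracts catastrophic forgetting.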