Neural Marked Temporal Point Processes (MTPP) are flexible models for capturing complex temporal inter-dependencies between labeled events. These models inherently learn two predictive distributions: one for the arrival times of events and another for the types of events, also known as marks. In this study, we demonstrate that learning an MTPP model can be framed as a two-task learning problem, in which both tasks share a common set of trainable parameters that are optimized jointly. We show that this often leads to the emergence of conflicting gradients during training, where the task-specific gradients point in opposite directions. When such conflicts arise, following the average gradient can be detrimental to the learning of each individual task, resulting in degraded overall performance. To overcome this issue, we introduce novel parametrizations for neural MTPP models that allow each task to be modeled and trained separately, effectively avoiding the problem of conflicting gradients. Through experiments on multiple real-world event sequence datasets, we demonstrate the benefits of our framework compared to the original model formulations.
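As a minimal illustration of the conflicting-gradients phenomenon described above, the sketch below checks whether two task-specific gradients (here, hypothetical gradients of the time and mark losses with respect to the shared parameters; the names and values are made up for illustration) point in opposing directions, i.e. have negative cosine similarity:

```python
import math

def cosine(g1, g2):
    """Cosine similarity between two gradient vectors."""
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    return dot / (n1 * n2)

# Hypothetical task-specific gradients w.r.t. shared parameters.
g_time = [1.0, -2.0, 0.5]   # gradient of the arrival-time loss
g_mark = [-1.0, 1.5, 0.5]   # gradient of the mark loss

# A negative cosine similarity means the gradients conflict: following
# their average pulls the shared parameters against at least one task.
conflict = cosine(g_time, g_mark) < 0.0
print(conflict)  # True for these values

# The average gradient that joint training would follow:
g_avg = [(a + b) / 2.0 for a, b in zip(g_time, g_mark)]
```

With separate parametrizations per task, as proposed in the abstract, each task's gradient updates only its own parameters, so such averaging never occurs.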