Self-attention-based networks have achieved remarkable performance in sequential recommendation tasks. A crucial component of these models is positional encoding. In this study, we analyze the learned positional embeddings, demonstrating that they often capture the distance between tokens. Building on this insight, we introduce novel attention models that directly learn positional relations. Extensive experiments reveal that our proposed models, \textbf{PARec} and \textbf{FPARec}, outperform previous self-attention-based approaches. Our code is available at the following link for anonymous review: https://anonymous.4open.science/r/FPARec-2C55/
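To make the phrase "directly learn positional relations" concrete, the following is a minimal sketch of one way such a layer could look: instead of computing attention weights from query-key dot products, it learns a per-position-pair logit matrix, with an optional low-rank factorization loosely mirroring the full vs. factorized distinction suggested by the PARec/FPARec naming. All names, the module structure, and the factorization rank here are illustrative assumptions, not the paper's actual implementation, which is in the linked repository.

\begin{verbatim}
import torch
import torch.nn as nn

class LearnedPositionalAttention(nn.Module):
    """Sketch: attention logits learned directly per position pair.

    Hypothetical illustration only; see the linked repository for
    the authors' actual PARec/FPARec implementation.
    """

    def __init__(self, max_len, hidden_dim, rank=None):
        super().__init__()
        if rank is None:
            # Full (L x L) learnable attention logits.
            self.logits = nn.Parameter(torch.zeros(max_len, max_len))
        else:
            # Factorized variant: (L x r) @ (r x L) low-rank logits.
            self.logits = None
            self.left = nn.Parameter(torch.randn(max_len, rank) * 0.02)
            self.right = nn.Parameter(torch.randn(rank, max_len) * 0.02)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        # Causal mask: each position attends only to itself and the past.
        self.register_buffer(
            "causal_mask",
            torch.triu(torch.ones(max_len, max_len, dtype=torch.bool),
                       diagonal=1),
        )

    def forward(self, x):  # x: (batch, seq_len, hidden_dim)
        L = x.size(1)
        logits = (self.logits if self.logits is not None
                  else self.left @ self.right)
        logits = logits[:L, :L].masked_fill(self.causal_mask[:L, :L],
                                            float("-inf"))
        # Attention depends only on positions, shared across the batch.
        attn = torch.softmax(logits, dim=-1)  # (L, L)
        return attn @ self.value(x)           # broadcast to (B, L, D)
\end{verbatim}

Because the attention pattern is a function of positions alone, the layer has no query-key projections to compute at inference time; the factorized variant additionally keeps the parameter count linear in the maximum sequence length.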