Sequential Recommender Systems (SRS) aim to model the sequential behaviors of users in order to capture their interests, which usually evolve over time. Transformer-based SRS have achieved remarkable success recently. However, studies reveal that the self-attention mechanism in Transformer-based models is essentially a low-pass filter and ignores high-frequency information, which may contain meaningful user interest patterns. This motivates us to seek better filtering techniques for SRS, and we find that the Discrete Wavelet Transform (DWT), a well-known time-frequency analysis technique from the digital signal processing field, can effectively process both low-frequency and high-frequency information. We design an adaptive time-frequency filter based on the DWT, which decomposes user interests into multiple signals at different frequencies and time positions and automatically learns the weights of these signals. Furthermore, we develop DWTRec, a sequential recommendation model built entirely on this adaptive time-frequency filter. Thanks to the fast DWT algorithm, DWTRec has lower theoretical time and space complexity and is proficient at modeling long sequences. Experiments show that our model outperforms state-of-the-art baselines on datasets with different domains, sparsity levels, and average sequence lengths. In particular, our model achieves larger performance gains over previous models as sequences grow longer, which demonstrates another advantage of our approach.
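The abstract does not spell out the filter's internals, so the following is a minimal sketch of the idea under stated assumptions: a single-level Haar DWT applied along the sequence dimension, with learnable per-band, per-channel weights applied before the inverse transform. The class name AdaptiveHaarFilter and its parameters are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of an adaptive time-frequency filter in the spirit of DWTRec.
# Assumptions: single-level Haar DWT over the sequence axis, even sequence length,
# and one learnable weight per channel for each frequency band.
import torch
import torch.nn as nn


class AdaptiveHaarFilter(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Learnable weights for the low-frequency (approximation) band
        # and the high-frequency (detail) band, one per hidden channel.
        self.low_weight = nn.Parameter(torch.ones(hidden_dim))
        self.high_weight = nn.Parameter(torch.ones(hidden_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim); seq_len assumed even for one Haar step.
        even, odd = x[:, 0::2, :], x[:, 1::2, :]
        s = 2.0 ** -0.5
        approx = (even + odd) * s   # low-frequency coefficients
        detail = (even - odd) * s   # high-frequency coefficients

        # Adaptive filtering: reweight each band before reconstruction,
        # so the model can keep or suppress low- and high-frequency content.
        approx = approx * self.low_weight
        detail = detail * self.high_weight

        # Inverse Haar transform back to the time domain.
        even_rec = (approx + detail) * s
        odd_rec = (approx - detail) * s
        return torch.stack([even_rec, odd_rec], dim=2).reshape_as(x)


if __name__ == "__main__":
    filt = AdaptiveHaarFilter(hidden_dim=64)
    seq = torch.randn(8, 50, 64)   # 8 users, 50 interactions, 64-dim embeddings
    print(filt(seq).shape)          # torch.Size([8, 50, 64])

A multi-level decomposition (repeatedly transforming the approximation band) would yield the multiple time-frequency signals the abstract describes; this sketch shows only the single-level case for brevity.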