Traditional sequential recommendation (SR) models learn low-dimensional item ID embeddings from user-item interactions, often overlooking textual information such as item titles or descriptions. Recent advances in Large Language Models (LLMs) have inspired a surge of research that encodes item text with high-dimensional semantic embeddings and designs transformation methods to inject such embeddings into SR models. These embedding transformation strategies fall into two types, both of which exhibit notable drawbacks: 1) adapter-based methods suffer from pronounced dimension collapse, concentrating information in a few dominant dimensions; 2) SVD-based methods are rigid and manual, retaining only a few principal spectral components while discarding the rich information in the rest of the spectrum. To address these limitations, we propose SpecTran, a spectral-aware transformer-based adapter that operates in the spectral domain, attending over the full spectrum to select and aggregate informative components. A learnable spectral-position encoding injects singular-value cues as an inductive bias, guiding attention toward salient spectral components and promoting diversity across embedding dimensions. Across four real-world datasets and three SR backbones, SpecTran consistently outperforms strong baselines, with an average improvement of 9.17%.
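The core idea — replacing top-k SVD truncation with attention over the full spectrum, biased by singular-value cues — can be illustrated with a minimal NumPy sketch. All shapes, the scalar stand-in for the learned spectral-position encoder, and the random output projection are illustrative assumptions; the abstract does not specify SpecTran's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LLM item-text embeddings: 100 items, 64-dim semantic space
# (shapes and names are illustrative assumptions, not the paper's).
E = rng.normal(size=(100, 64))

# Full SVD exposes the spectrum: E = U @ diag(S) @ Vt.
U, S, Vt = np.linalg.svd(E, full_matrices=False)

# Each item's loading on every spectral component; unlike the SVD-based
# strategy the abstract criticizes, nothing is truncated here.
coeffs = U * S  # (100, 64)

# A spectral-position encoding derived from singular values; a single
# random scalar stands in for a learnable layer in this sketch.
w = rng.normal()
pos_bias = w * np.log(S + 1e-8)  # (64,) per-component bias

# Attention over the full spectrum: scores mix each item's component
# magnitude with the singular-value bias, then softmax-normalize.
scores = np.abs(coeffs) + pos_bias                      # (100, 64)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn = attn / attn.sum(axis=1, keepdims=True)           # rows sum to 1

# Aggregate attended spectral components into SR-sized item embeddings
# via a random (untrained) output projection.
d_sr = 32
W_out = rng.normal(size=(64, d_sr)) / np.sqrt(64)
item_emb = (attn * coeffs) @ W_out                      # (100, 32)
print(item_emb.shape)
```

In a trained adapter the scalar `w` and `W_out` would be learned jointly with the SR backbone; the sketch only shows how singular-value cues can steer attention across all spectral components rather than a hand-picked few.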