Sequential recommender systems must model long-range user behavior while operating under strict memory and latency constraints. Transformer-based approaches achieve strong accuracy but suffer from quadratic attention complexity, forcing aggressive truncation of user histories and limiting their practicality for long-horizon modeling. This paper presents HoloMambaRec, a lightweight sequential recommendation architecture that combines holographic reduced representations for attribute-aware embedding with a selective state space encoder for linear-time sequence processing. Item and attribute embeddings are bound via circular convolution, which encodes structured metadata while preserving embedding dimensionality. A shallow selective state space backbone, inspired by recent Mamba-style models, enables efficient training and constant-time recurrent inference. Experiments on the Amazon Beauty and MovieLens-1M datasets demonstrate that HoloMambaRec consistently outperforms SASRec and remains competitive with GRU4Rec under a constrained 10-epoch training budget, while maintaining substantially lower memory complexity. The design further incorporates forward-compatible mechanisms for temporal bundling and inference-time compression, positioning HoloMambaRec as a practical and extensible alternative for scalable, metadata-aware sequential recommendation.
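The binding step described above can be sketched with a minimal holographic-reduced-representation (HRR) example: circular convolution binds an item vector to an attribute vector without increasing dimensionality, and circular correlation approximately recovers the item given the attribute. This is an illustrative sketch, not the paper's implementation; the vector dimension, initialization, and function names are assumptions.

```python
import numpy as np

def circular_convolve(a, b):
    # HRR binding: circular convolution, computed in the Fourier domain.
    # The result has the same dimensionality as the inputs.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def circular_correlate(c, b):
    # Approximate unbinding: circular correlation with one factor
    # yields a noisy reconstruction of the other factor.
    return np.fft.irfft(np.fft.rfft(c) * np.conj(np.fft.rfft(b)), n=len(c))

rng = np.random.default_rng(0)
d = 64                                    # embedding dimension (assumed)
item = rng.normal(0.0, 1.0 / np.sqrt(d), d)   # item embedding
attr = rng.normal(0.0, 1.0 / np.sqrt(d), d)   # attribute embedding

bound = circular_convolve(item, attr)     # still d-dimensional
recovered = circular_correlate(bound, attr)   # noisy copy of `item`

similarity = float(
    np.dot(recovered, item)
    / (np.linalg.norm(recovered) * np.linalg.norm(item))
)
```

Because binding preserves dimensionality, the bound vector can replace the plain item embedding in the sequence model at no extra memory cost; the reconstruction is noisy but remains strongly correlated with the original item vector for moderate dimensions.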