Sequential recommender systems must model long-range user behavior while operating under strict memory and latency constraints. Transformer-based approaches achieve strong accuracy but suffer from quadratic attention complexity, forcing aggressive truncation of user histories and limiting their practicality for long-horizon modeling. This paper presents HoloMambaRec, a lightweight sequential recommendation architecture that combines holographic reduced representations for attribute-aware embedding with a selective state space encoder for linear-time sequence processing. Item and attribute information is bound using circular convolution, preserving embedding dimensionality while encoding structured metadata. A shallow selective state space backbone, inspired by recent Mamba-style models, enables efficient training and constant-time recurrent inference. Experiments on Amazon Beauty and MovieLens-1M under a 10-epoch budget show that HoloMambaRec surpasses SASRec on both datasets, attains state-of-the-art ranking on MovieLens-1M, and trails only GRU4Rec on Amazon Beauty, all while maintaining substantially lower memory complexity. The design further incorporates forward-compatible mechanisms for temporal bundling and inference-time compression, positioning HoloMambaRec as a practical and extensible alternative for scalable, metadata-aware sequential recommendation.
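The circular-convolution binding mentioned above can be sketched as follows. This is a minimal illustration of holographic reduced representations using NumPy, not the paper's actual implementation: the function names `circular_bind`/`circular_unbind` and the Gaussian embedding initialization are assumptions for the sketch.

```python
import numpy as np

def circular_bind(a, b):
    # Circular convolution via FFT: O(d log d), output keeps dimension d,
    # so bound item+attribute vectors stay the same size as the inputs.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circular_unbind(c, b):
    # Approximate inverse used in HRRs: correlate with the involution of b
    # (b with all elements after index 0 reversed).
    b_inv = np.concatenate(([b[0]], b[1:][::-1]))
    return circular_bind(c, b_inv)

rng = np.random.default_rng(0)
d = 2048
item = rng.normal(0.0, 1.0 / np.sqrt(d), d)  # item embedding (unit expected norm)
attr = rng.normal(0.0, 1.0 / np.sqrt(d), d)  # attribute embedding

bound = circular_bind(item, attr)            # still a d-dimensional vector
recovered = circular_unbind(bound, attr)     # noisy reconstruction of `item`

cos = recovered @ item / (np.linalg.norm(recovered) * np.linalg.norm(item))
```

Unbinding is only approximate (the recovered vector is the original plus noise), but for large `d` the cosine similarity to the true item embedding is high, which is what makes a single fixed-width vector able to carry structured metadata.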
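The constant-time recurrent inference claimed for the selective state space backbone can be illustrated with a toy single-channel recurrence. This is a simplified sketch in the spirit of Mamba-style models, not HoloMambaRec's encoder; the diagonal dynamics, the scalar input channel, and all parameter names here are assumptions.

```python
import numpy as np

def ssm_step(h, x, A, Wb, Wc, Wdt):
    """One recurrent step of a simplified selective SSM.

    h: (n,) hidden state; x: scalar input at this time step.
    The step size dt and the input projection depend on x, which is
    what makes the update "selective" (input-dependent). Each step
    costs O(n), independent of sequence length.
    """
    dt = np.log1p(np.exp(Wdt * x))            # softplus -> positive step size
    h_new = np.exp(dt * A) * h + dt * (Wb * x) * x  # ZOH-style discretization
    y = Wc @ h_new                            # scalar output readout
    return h_new, y

rng = np.random.default_rng(0)
n = 16
A = -np.exp(rng.normal(size=n))               # stable (negative) diagonal dynamics
Wb, Wc, Wdt = rng.normal(size=(3, n))

h = np.zeros(n)
for x in rng.normal(size=50):                 # O(T) scan over the sequence
    h, y = ssm_step(h, x, A, Wb, Wc, Wdt)
```

At inference time only the fixed-size state `h` is carried forward, so serving cost per new interaction is constant regardless of how long the user history is, in contrast to attention's growing key/value cache.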