Recent advances in LLM-based recommendation have shown promise, yet their cross-domain generalization is hindered by a fundamental mismatch between language-centric pretraining and the recommendation task. Existing methods, relying on language-level knowledge, fail to capture dynamic, item-level user interests across domains. To bridge this gap, we propose RecBase, a domain-agnostic foundation model pretrained with a recommendation-oriented objective. RecBase leverages a large-scale, heterogeneous, cross-domain corpus with unified textual representations and feature mappings to enhance cross-domain generalization. To further align item semantics across domains, we introduce a unified item tokenizer that encodes items into hierarchical concept identifiers, enabling structured representation and efficient vocabulary sharing. The model is trained with an autoregressive objective to capture complex item-level sequential patterns. On eight real-world datasets, our 1.5B-parameter model matches or surpasses the performance of LLM baselines of up to 7B parameters on zero-shot and cross-domain recommendation tasks.
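To make the two mechanisms concrete, here is a minimal sketch (not the authors' code) of how a unified item tokenizer might map item embeddings to hierarchical concept identifiers, and how interaction histories then flatten into token streams for an autoregressive next-token objective. The residual-quantization scheme, codebook values, and token naming are illustrative assumptions.

```python
# Sketch of: (1) hierarchical concept IDs via residual-style quantization,
# (2) flattening an item sequence for autoregressive pretraining.
# All centroids/dimensions below are toy values, not from the paper.

def nearest(vec, centroids):
    """Index of the centroid closest to vec (squared Euclidean distance)."""
    dists = [sum((v - c) ** 2 for v, c in zip(vec, cen)) for cen in centroids]
    return min(range(len(centroids)), key=dists.__getitem__)

def item_to_codes(vec, codebooks):
    """Each level quantizes the residual left by the previous level,
    yielding coarse-to-fine concept identifiers from a shared vocabulary."""
    codes, residual = [], list(vec)
    for level, centroids in enumerate(codebooks):
        idx = nearest(residual, centroids)
        codes.append(f"<c{level}_{idx}>")  # hierarchical concept token
        residual = [r - c for r, c in zip(residual, centroids[idx])]
    return codes

def sequence_to_tokens(item_vecs, codebooks):
    """Flatten a user's item sequence into one token stream."""
    return [tok for vec in item_vecs for tok in item_to_codes(vec, codebooks)]

# Toy 2-level codebooks over 2-d item embeddings.
codebooks = [
    [(0.0, 0.0), (1.0, 1.0)],    # level 0: coarse concepts
    [(-0.1, 0.0), (0.1, 0.0)],   # level 1: refinement of the residual
]
history = [(0.9, 1.1), (0.05, -0.02)]  # two interacted items, any domain
tokens = sequence_to_tokens(history, codebooks)
# Autoregressive objective: predict each token from its prefix.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
```

Because every item in every domain is expressed over the same small concept vocabulary, sequences from different domains share token statistics, which is what allows a single autoregressive model to transfer item-level interest patterns zero-shot.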