Cross-Domain Sequential Recommendation (CDSR) seeks to improve user preference modeling by transferring knowledge across multiple domains. Despite progress in CDSR, most existing methods rely on overlapping users or items to establish cross-domain correlations, a requirement that rarely holds in real-world settings. The advent of large language models (LLMs) and model-merging techniques appears to overcome this limitation by unifying multi-domain data without explicit overlaps. Yet our empirical study shows that naively training an LLM on combined domains, or simply merging several domain-specific LLMs, often degrades performance relative to a model trained solely on the target domain. To address these challenges, we first experimentally investigate the causes of suboptimal performance in LLM-based cross-domain recommendation and model merging. Building on these insights, we introduce WeaveRec, which cross-trains multiple LoRA modules with source- and target-domain data in a weaving fashion and fuses them via model merging. WeaveRec can be extended to multi-source-domain scenarios and, notably, introduces no additional inference-time cost in latency or memory. Furthermore, we provide a theoretical guarantee that WeaveRec reduces the upper bound of the expected error in the target domain. Extensive experiments on single-source, multi-source, and cross-platform cross-domain recommendation scenarios validate that WeaveRec effectively mitigates performance degradation and consistently outperforms baseline approaches in real-world recommendation tasks.
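The merging step mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual WeaveRec procedure; it only shows the common pattern of fusing domain-specific LoRA adapters by averaging their low-rank weight updates (the `merge_lora_deltas` helper and all dimensions are hypothetical).

```python
import numpy as np

def merge_lora_deltas(deltas, weights=None):
    """Fuse several LoRA weight updates (each delta = B @ A) by a
    weighted average; uniform weights if none are given."""
    if weights is None:
        weights = [1.0 / len(deltas)] * len(deltas)
    return sum(w * d for w, d in zip(weights, deltas))

# Two hypothetical rank-2 LoRA modules for the same 4x4 base layer,
# e.g. one trained on a source domain and one on the target domain.
rng = np.random.default_rng(0)
A1, B1 = rng.normal(size=(2, 4)), rng.normal(size=(4, 2))
A2, B2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 2))

delta1, delta2 = B1 @ A1, B2 @ A2
merged = merge_lora_deltas([delta1, delta2])
# The fused update is then applied to the frozen base weight: W = W0 + merged,
# so inference cost is identical to a single adapted model.
```

Because the merged delta is folded into the base weights, this kind of fusion adds no extra latency or memory at inference time, which matches the property the abstract claims for WeaveRec.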