Cross-domain Recommendation (CR) aims to improve recommendation in a sparse target domain by leveraging information from other, richer domains. Existing cross-domain methods mainly focus on overlapping scenarios: they assume users are fully or partially overlapped across domains and treat these shared users as bridges connecting the domains. However, this assumption does not always hold, since leaking users' identity information to other domains is often illegal. Conducting Non-overlapping Many-to-one Cross-domain Recommendation (NMCR) is challenging because 1) the absence of overlapping information prevents us from directly aligning different domains, and this situation may worsen in the many-to-one (MCR) setting with multiple source domains; 2) the distribution discrepancy between source and target domains makes it difficult to learn information common to all domains. To overcome these challenges, we focus on NMCR and devise MCRPL as our solution. To address Challenge 1, we learn shared domain-agnostic and domain-dependent prompts and optimize them in a pre-training stage. To address Challenge 2, we then update only the domain-dependent prompts, keeping all other parameters fixed, to transfer the learned knowledge to the target domain. We conduct experiments on five real-world domains, and the results demonstrate the advantage of our MCRPL method over several recent state-of-the-art baselines.
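The two-stage paradigm described above (pre-train shared prompts, then tune only the domain-dependent ones) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt lengths, embedding size, domain names, and function names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8         # embedding size (assumed for illustration)
PROMPT_LEN = 2  # tokens per prompt (assumed for illustration)

# Shared, domain-agnostic prompt: a single copy for all domains.
agnostic_prompt = rng.normal(size=(PROMPT_LEN, EMB))
# Domain-dependent prompts: one copy per domain (names are hypothetical).
dependent_prompt = {d: rng.normal(size=(PROMPT_LEN, EMB))
                    for d in ["books", "movies", "target"]}

def encode(item_embs, domain):
    """Prepend both prompts to an item-embedding sequence: the prompts,
    rather than overlapping users, serve as the cross-domain bridge
    (Challenge 1)."""
    return np.concatenate([agnostic_prompt,
                           dependent_prompt[domain],
                           item_embs], axis=0)

def trainable_params(stage, domain):
    """Stage 1 (pre-train): all prompts are updated on the source domains.
    Stage 2 (prompt tuning): only the target domain's dependent prompt is
    updated, with everything else frozen, transferring the shared
    knowledge across the distribution gap (Challenge 2)."""
    if stage == "pretrain":
        return [agnostic_prompt] + list(dependent_prompt.values())
    return [dependent_prompt[domain]]
```

In stage 2 an optimizer would receive only `trainable_params("tune", "target")`, so gradient updates cannot disturb the knowledge captured during pre-training.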