Negative sampling is a pivotal technique in implicit collaborative filtering (CF) recommendation, enabling efficient and effective training by contrasting observed interactions with sampled unobserved ones. Recently, large language models (LLMs) have shown promise in recommender systems; however, LLM-empowered negative sampling remains underexplored. Existing methods rely heavily on textual information and task-specific fine-tuning, limiting their practical applicability. To address this limitation, we propose a text-free and fine-tuning-free Dual-Tree LLM-enhanced Negative Sampling method (DTL-NS). It consists of two modules: (i) an offline false-negative identification module that leverages hierarchical index trees to transform collaborative structural and latent semantic information into structured item-ID encodings for LLM inference, enabling accurate identification of false negatives; and (ii) a multi-view hard negative sampling module that combines user-item preference scores with item-item hierarchical similarities derived from these encodings to mine high-quality hard negatives, thereby improving the model's discriminative ability. Extensive experiments demonstrate the effectiveness of DTL-NS. For example, on the Amazon-sports dataset, DTL-NS outperforms the strongest baseline by 10.64% and 19.12% in Recall@20 and NDCG@20, respectively. Moreover, DTL-NS can be integrated into various implicit CF models and negative sampling methods, consistently enhancing their performance.
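The multi-view scoring idea in module (ii) can be illustrated with a minimal sketch. The abstract does not give the exact formula, so the blending weight `alpha`, the dot-product preference score, and the prefix-overlap similarity over tree-path item-ID codes below are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def prefix_similarity(code_a, code_b):
    """Hypothetical item-item hierarchical similarity: length of the
    shared prefix of two tree-path item-ID codes, normalized by the
    shallower code's depth (deeper shared prefix = closer in the tree)."""
    depth = min(len(code_a), len(code_b))
    shared = 0
    for a, b in zip(code_a, code_b):
        if a != b:
            break
        shared += 1
    return shared / depth

def hard_negative_scores(user_emb, item_embs, pos_code, item_codes, alpha=0.5):
    """Multi-view hardness score for candidate negatives: blend the
    user-item preference view (dot product with the user embedding)
    with the item-item hierarchical view (similarity of each candidate's
    ID code to the positive item's code). `alpha` is an assumed weight."""
    pref = item_embs @ user_emb                                   # user-item view
    sim = np.array([prefix_similarity(pos_code, c) for c in item_codes])
    return alpha * pref + (1.0 - alpha) * sim                     # blended score
```

Candidates scoring high on both views are plausible hard negatives: the user is predicted to like them, and they sit near the positive item in the index tree.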