In large-scale recommendation systems like the LinkedIn Feed, the retrieval stage is critical for narrowing hundreds of millions of potential candidates to a manageable subset for ranking. LinkedIn's Feed serves suggested content from outside the member's network (based on the member's topical interests), where 2,000 candidates are retrieved from a pool of hundreds of millions within a latency budget of a few milliseconds, at an inbound rate of several thousand queries per second. This paper presents a novel retrieval approach that fine-tunes a large causal language model (Meta's LLaMA 3) as a dual encoder to generate high-quality embeddings for both users (members) and content (items), using only textual input. We describe the end-to-end pipeline, including prompt design for embedding generation, techniques for fine-tuning at LinkedIn's scale, and infrastructure for low-latency, cost-effective online serving. We share our findings on how quantizing numerical features in the prompt allows that information to be properly encoded in the embedding, improving alignment between the retrieval and ranking layers. The system was evaluated using offline metrics and an online A/B test, which showed substantial improvements in member engagement. We observed significant gains among newer members, who often lack strong network connections, indicating that high-quality suggested content aids retention. This work demonstrates how generative language models can be effectively adapted for real-time, high-throughput retrieval in industrial applications.
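To make the two core ideas in the abstract concrete, the sketch below shows (1) quantizing a numerical feature into a discrete text bucket before it enters the prompt, since raw floats tend to be split into arbitrary digit tokens that a language model encodes poorly, and (2) using a causal LM as a dual encoder by pooling the last-token hidden state into a normalized embedding and scoring member-item pairs by cosine similarity. This is a minimal illustration, not the paper's production pipeline; the model checkpoint, bucket edges, prompt templates, and pooling choice are all assumptions for the example.

```python
# Minimal dual-encoder sketch (illustrative assumptions, not production code).
import torch
from transformers import AutoModel, AutoTokenizer


def quantize_feature(name: str, value: float, edges: list[float]) -> str:
    """Map a continuous feature to a discrete bucket token, e.g.
    'like_count: bucket_2', so the LM sees a stable textual symbol
    rather than arbitrary digit pieces."""
    bucket = sum(value >= e for e in edges)  # index of the bucket
    return f"{name}: bucket_{bucket}"


@torch.no_grad()
def embed(texts: list[str], model, tokenizer) -> torch.Tensor:
    """Encode prompts with a causal LM and pool the hidden state of the
    last non-padding token as the embedding (a common dual-encoder choice)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state            # [B, T, H]
    last = batch["attention_mask"].sum(dim=1) - 1        # last real token index
    emb = hidden[torch.arange(hidden.size(0)), last]     # [B, H]
    return torch.nn.functional.normalize(emb, dim=-1)    # unit norm for cosine sim


# Assumed checkpoint; any causal LM with a compatible tokenizer would do.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-8B")

member_prompt = (
    "Member interests: distributed systems, machine learning. "
    + quantize_feature("session_count_7d", 12.0, edges=[1, 5, 20, 100])
)
item_prompt = (
    "Post topic: machine learning infrastructure. "
    + quantize_feature("like_count", 340.0, edges=[10, 100, 1000])
)

member_emb = embed([member_prompt], model, tokenizer)
item_emb = embed([item_prompt], model, tokenizer)
score = member_emb @ item_emb.T  # cosine similarity used for retrieval
```

In a retrieval setting like the one described, item embeddings would be precomputed and indexed for approximate nearest-neighbor search, with only the member embedding computed online under the stated millisecond latency budget.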