With the rapid advancement of pre-trained large language models (LLMs), recent work has leveraged the capabilities of LLMs for relevance modeling, yielding improved performance. This is usually done by fine-tuning LLMs on specially annotated datasets to determine the relevance between queries and items. However, naively employing LLMs for relevance modeling through fine-tuning and inference has two limitations. First, LLMs are not inherently well suited to nuanced tasks that go beyond simple yes-or-no answers, such as assessing search relevance. They therefore tend to be overconfident and struggle to distinguish the fine-grained degrees of relevance (e.g., strong relevance, weak relevance, irrelevance) used in search engines. Second, they exhibit significant performance degradation when confronted with data distribution shifts in real-world scenarios. In this paper, we propose a novel Distribution-Aware Robust Learning framework (DaRL) for relevance modeling in Alipay Search. Specifically, we design an effective loss function to enhance the discriminability of LLM-based relevance modeling across fine-grained degrees of query-item relevance. To improve the generalizability of LLM-based relevance modeling, we first propose the Distribution-Aware Sample Augmentation (DASA) module. This module uses out-of-distribution (OOD) detection techniques to actively select, for model fine-tuning, appropriate samples that are not well covered by the original training set. Furthermore, we adopt a multi-stage fine-tuning strategy to improve in-distribution (ID) and OOD performance simultaneously, bridging the performance gap between them. DaRL has been deployed online to serve Alipay's insurance product search...
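The abstract does not specify the form of the discriminative loss, but the goal of separating fine-grained grades (irrelevant, weakly relevant, strongly relevant) can be illustrated with a minimal margin-based ordinal loss over a scalar relevance score. This is a hypothetical sketch, not the paper's actual objective; the thresholds and margin are illustrative assumptions.

```python
# Hypothetical sketch: a margin-based ordinal loss over three relevance
# grades (0 = irrelevant, 1 = weak, 2 = strong). It pushes a scalar score
# above the thresholds below its grade and below the thresholds at or above
# its grade, discouraging the overconfident binary behavior described above.

def ordinal_margin_loss(score: float, grade: int, margin: float = 1.0) -> float:
    """Hinge penalties against per-grade thresholds t_0 = 0, t_1 = margin."""
    thresholds = [0.0, margin]  # illustrative choices, not from the paper
    loss = 0.0
    for k, t in enumerate(thresholds):
        if k < grade:
            # score should clear this threshold with half-margin slack
            loss += max(0.0, t - score + margin / 2)
        else:
            # score should stay below this threshold with half-margin slack
            loss += max(0.0, score - t + margin / 2)
    return loss
```

A score consistent with its grade (e.g., a strongly relevant pair scored well above both thresholds) incurs zero loss, while an overconfident score for an irrelevant pair is penalized at every threshold it violates.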
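The DASA module's mechanics are not detailed in the abstract. As a rough sketch of the stated idea, one simple OOD score is a candidate's mean distance to its k nearest neighbors in the embedded training set; candidates with the highest scores are the ones least covered by existing data and are added to the fine-tuning set. The embeddings, `k`, and selection size here are all illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of OOD-driven sample selection: score each candidate
# query-item embedding by its mean distance to the k nearest training
# embeddings, then keep the most out-of-distribution candidates.
import math

def knn_ood_score(candidate, train_embs, k=2):
    """Mean Euclidean distance from `candidate` to its k nearest train points."""
    dists = sorted(math.dist(candidate, e) for e in train_embs)
    return sum(dists[:k]) / k

def select_ood_samples(candidates, train_embs, k=2, top_n=1):
    """Return the top_n candidates least covered by the training set."""
    ranked = sorted(candidates,
                    key=lambda c: knn_ood_score(c, train_embs, k),
                    reverse=True)
    return ranked[:top_n]
```

In practice such scores would be computed over model embeddings of real query-item pairs, with a threshold or budget deciding how many OOD samples enter the next fine-tuning stage.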