Anchor-document data derived from web graphs offers a wealth of paired information for training dense retrieval models in an unsupervised manner. However, the inherent noise in such data compromises the robustness of training and consequently hurts retrieval performance. In this paper, we introduce WebDRO, an efficient method that clusters web graph data and optimizes group weights to enhance the robustness of pretraining dense retrieval models on web graphs. We first build an embedding model for clustering anchor-document pairs. Specifically, we contrastively train the embedding model on link prediction, which guides it to capture the inherent document features behind web graph links. We then employ group distributionally robust optimization to recalibrate the weights across different clusters of anchor-document pairs during training, directing the model to assign higher weights to clusters with higher loss and to focus more on worst-case scenarios. Experiments on MS MARCO and BEIR demonstrate that our method effectively improves retrieval performance in unsupervised training settings. Further analysis confirms the stability and validity of the group weights learned by WebDRO. All code will be released via GitHub.
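The group reweighting described above follows the standard exponentiated-gradient update used in group distributionally robust optimization: clusters whose current loss is higher receive exponentially larger weights before renormalization. A minimal sketch of that update step is shown below; the function name, the step size `eta`, and the toy loss values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def update_group_weights(weights, group_losses, eta=0.1):
    """One exponentiated-gradient step of group DRO (illustrative sketch):
    clusters with higher loss receive larger weights, then weights are
    renormalized to sum to 1."""
    w = weights * np.exp(eta * np.asarray(group_losses))
    return w / w.sum()

# Toy example: three anchor-document clusters with different losses.
weights = np.full(3, 1.0 / 3.0)       # start from uniform weights
group_losses = [0.2, 1.0, 0.5]        # hypothetical per-cluster losses
weights = update_group_weights(weights, group_losses)
# The weighted training objective would then be sum_g weights[g] * loss[g].
```

After this step, the cluster with the largest loss (the second one) holds the largest weight, so the model's next update focuses more on that worst-case group.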