Entity Matching (EM)--the task of determining whether two data records refer to the same real-world entity--is a core task in data integration. Recent advances in deep learning have set a new standard for EM, particularly through fine-tuning Pretrained Language Models (PLMs) and, more recently, Large Language Models (LLMs). However, fine-tuning typically requires large amounts of labeled data, which are expensive and time-consuming to obtain. In e-commerce matching, label scarcity varies widely across domains, raising the question of how to train accurate domain-specific EM models with limited labeled data. In this work, we assume users have only a limited number of labels for a specific target domain but have access to labeled data from other domains. We introduce BEACON, a distribution-aware, budget-aware framework for low-resource EM across domains. BEACON leverages the insight that embedding representations of pairwise candidate matches can guide the effective selection of out-of-domain samples under limited in-domain supervision. We conduct extensive experiments across multiple domain-partitioned datasets derived from established EM benchmarks, demonstrating that BEACON consistently outperforms state-of-the-art methods under different training budgets.