Entity Resolution (ER) is a critical task for data integration, yet state-of-the-art supervised deep learning models remain impractical for many real-world applications due to their need for massive, expensive-to-obtain labeled datasets. While Active Learning (AL) offers a potential solution to this "label scarcity" problem, existing approaches introduce severe scalability bottlenecks. Specifically, they achieve high accuracy but incur prohibitive computational costs by re-training complex models from scratch or solving NP-hard selection problems in every iteration. In this paper, we propose ALER, a novel semi-supervised pipeline designed to bridge the gap between semantic accuracy and computational scalability. ALER eliminates the training bottleneck by using a frozen bi-encoder architecture to generate static embeddings once and then iteratively training a lightweight classifier on top. To address the memory bottleneck associated with large-scale candidate pools, we first select a representative sample of the data and then use K-Means to partition this sample into semantically coherent chunks, enabling an efficient AL loop. We further propose a hybrid query strategy that combines "confused" and "confident" pairs to efficiently refine the decision boundary while correcting high-confidence errors. Extensive evaluation demonstrates ALER's superior efficiency, particularly on the large-scale DBLP dataset: it accelerates the training loop by 1.3x while drastically reducing resolution latency by a factor of 3.8 compared to the fastest baseline.
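The pipeline described above can be sketched in a few lines. This is a minimal, illustrative sketch, not the authors' implementation: synthetic vectors stand in for the static embeddings a frozen bi-encoder would produce once, `LogisticRegression` stands in for the lightweight classifier, and all sample sizes, cluster counts, and query budgets are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for static pair embeddings generated once by a frozen bi-encoder.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)  # oracle labels

# Select a representative sample, then partition it into coherent chunks.
sample_idx = rng.choice(len(X), size=400, replace=False)
chunks = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X[sample_idx])

# Seed the labeled set with a few pairs from each chunk.
labeled = []
for c in range(4):
    members = sample_idx[chunks == c]
    labeled.extend(rng.choice(members, size=5, replace=False))
pool = [i for i in sample_idx if i not in set(labeled)]

clf = LogisticRegression(max_iter=1000)
for _ in range(5):  # AL iterations: only the light classifier is re-trained
    clf.fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    # Hybrid query: "confused" pairs near the decision boundary, plus
    # "confident" pairs whose labels are verified to catch high-confidence errors.
    confused = np.argsort(np.abs(probs - 0.5))[:5]
    confident = np.argsort(np.abs(probs - 0.5))[-5:]
    picked = {pool[i] for i in np.concatenate([confused, confident])}
    labeled.extend(picked)
    pool = [i for i in pool if i not in picked]

acc = clf.score(X, y)
```

Because the embeddings are fixed, each AL iteration only refits the cheap classifier, which is the source of the training-loop speedup the abstract claims.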