Blocking is a critical step in entity resolution, and the emergence of neural network-based representation models has made dense blocking a promising approach for exploiting deep semantics in blocking. However, previous advanced self-supervised dense blocking approaches require domain-specific training on the target domain, which limits their applicability and slows adaptation to new domains. To address this issue, we propose UniBlocker, a dense blocker that is pre-trained on a domain-independent, easily obtainable tabular corpus using self-supervised contrastive learning. Through domain-independent pre-training, UniBlocker can be adapted to various downstream blocking scenarios without domain-specific fine-tuning. To evaluate the universality of our entity blocker, we also construct a new benchmark covering a wide range of blocking tasks from multiple domains and scenarios. Our experiments show that the proposed UniBlocker, without any domain-specific learning, significantly outperforms previous self- and unsupervised dense blocking methods and is comparable and complementary to state-of-the-art sparse blocking methods.
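To make the dense-blocking idea concrete, the following is a minimal toy sketch (not UniBlocker itself): each record is serialized to text and embedded into a vector, and the candidate pairs for entity resolution are each record's top-k nearest neighbors by cosine similarity. The character-trigram hashing encoder below is a stand-in assumption for a pre-trained dense encoder; a real system would use a learned model and an approximate nearest-neighbor index.

```python
# Toy sketch of dense blocking: embed records, take top-k nearest
# neighbors as candidate pairs. The embed() function is a hypothetical
# stand-in for a pre-trained dense blocker, NOT the UniBlocker model.
import math


def embed(text, dim=256):
    """Character-trigram hashing embedding, L2-normalized."""
    vec = [0.0] * dim
    t = f"  {text.lower()}  "  # pad so edge trigrams are captured
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def top_k_candidates(records, k=2):
    """For each record index, return its k most similar other records.

    Brute-force O(n^2) cosine similarity; a real blocker would use an
    approximate nearest-neighbor index instead.
    """
    embs = [embed(r) for r in records]
    candidates = {}
    for i, ei in enumerate(embs):
        sims = [(sum(a * b for a, b in zip(ei, ej)), j)
                for j, ej in enumerate(embs) if j != i]
        sims.sort(reverse=True)
        candidates[i] = [j for _, j in sims[:k]]
    return candidates


records = [
    "iPhone 14 Pro, Apple, 128GB",
    "Apple iPhone 14 Pro 128 GB smartphone",
    "Galaxy S23, Samsung, 256GB",
]
cands = top_k_candidates(records, k=1)
```

Here the two iPhone records share many trigrams, so each retrieves the other as its top candidate, while the Samsung record does not pair with them first. Dense blockers follow the same retrieve-by-similarity pattern but replace the surface-level trigram encoder with a learned semantic one.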