Recent advances in language models have opened new opportunities for addressing complex schema matching tasks. Schema matching approaches have been proposed that demonstrate the usefulness of language models, but they have also uncovered important limitations: small language models (SLMs) require training data (which can be expensive and challenging to obtain), and large language models (LLMs) often incur high computational costs and must operate within the constraints imposed by their context windows. We present Magneto, a cost-effective and accurate solution for schema matching that combines the advantages of SLMs and LLMs to address their limitations. By structuring the schema matching pipeline in two phases, retrieval and reranking, Magneto can use computationally efficient SLM-based strategies to derive candidate matches, which can then be reranked by LLMs, making it possible to reduce runtime without compromising matching accuracy. We propose a self-supervised approach to fine-tune SLMs that uses LLMs to generate syntactically diverse training data, along with prompting strategies that are effective for reranking. We also introduce a new benchmark, developed in collaboration with domain experts, which includes real biomedical datasets and presents new challenges to schema matching methods. Through a detailed experimental evaluation, using both our new and existing benchmarks, we show that Magneto is scalable and attains high accuracy for datasets from different domains.