Many methods and systems have been proposed to tackle the ontology alignment problem, yet producing high-quality mappings among a set of input ontologies remains a major challenge. Adopting a human-in-the-loop approach during the alignment process has become essential in applications that require highly accurate mappings. However, user involvement is expensive when dealing with large ontologies. In this paper, we analyse the feasibility of using Large Language Models (LLMs) to aid in ontology alignment. LLMs are used only to validate the subset of correspondences for which there is high uncertainty. We have conducted an extensive analysis over several tasks of the Ontology Alignment Evaluation Initiative (OAEI), and we report the performance of several state-of-the-art LLMs using different prompt templates. Using LLMs as oracles resulted in strong performance in the OAEI 2025 campaign, achieving the top-2 overall rank in the Bio-ML track.
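The oracle-based validation described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the confidence thresholds, the mapping data, the prompt template, and the `stub_oracle` function are all hypothetical stand-ins for a real alignment system and a real LLM call.

```python
def select_uncertain(mappings, low=0.4, high=0.8):
    """Keep only correspondences whose confidence falls in the
    uncertainty band; confident ones bypass the (expensive) oracle."""
    return [m for m in mappings if low <= m["score"] < high]

def build_prompt(src, tgt):
    """One possible yes/no prompt template (hypothetical)."""
    return (f"Do the concepts '{src}' and '{tgt}' refer to the same "
            f"real-world entity? Answer 'yes' or 'no'.")

def validate(mappings, oracle, low=0.4, high=0.8):
    """Accept confident mappings outright; route only the uncertain
    ones to the LLM oracle and keep those it confirms."""
    accepted = [m for m in mappings if m["score"] >= high]
    for m in select_uncertain(mappings, low, high):
        if oracle(build_prompt(m["source"], m["target"])) == "yes":
            accepted.append(m)
    return accepted

if __name__ == "__main__":
    candidates = [
        {"source": "Myocardium", "target": "Heart muscle", "score": 0.95},
        {"source": "Femur", "target": "Thigh bone", "score": 0.55},
        {"source": "Femur", "target": "Humerus", "score": 0.50},
    ]

    # Stub standing in for a real LLM call; in practice this would
    # query a model with the prompt and parse its answer.
    def stub_oracle(prompt):
        return "yes" if "Thigh bone" in prompt else "no"

    kept = validate(candidates, stub_oracle)
    print([(m["source"], m["target"]) for m in kept])
```

The key cost-saving step is that only mappings inside the uncertainty band ever reach the oracle; high-confidence correspondences are accepted and low-confidence ones rejected without any LLM call.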