Ontologies provide a formal representation of the knowledge shared within Semantic Web applications. Ontology learning is the construction of ontologies from a given corpus. In recent years, ontology learning has progressed from shallow-learning to deep-learning methodologies, each offering distinct advantages and limitations for knowledge extraction and representation. A new trend among these approaches is to rely on large language models (LLMs) to enhance ontology learning. This paper reviews the approaches to and challenges of ontology learning. It analyzes the methodologies and limitations of shallow-learning-based and deep-learning-based techniques for ontology learning, and provides comprehensive background for the frontier work of using LLMs to enhance ontology learning. In addition, it proposes several noteworthy future directions for further exploration of the integration of LLMs with ontology learning tasks.