Capability ontologies are increasingly used to model the functionalities of systems and machines. Creating such ontological models, with all properties and constraints of capabilities, is highly complex and typically requires ontology experts. However, Large Language Models (LLMs) have shown that they can generate machine-interpretable models from natural language text and can thus support engineers and ontology experts. This paper therefore investigates how LLMs can be used to create capability ontologies. We present a study with a series of experiments in which capabilities of varying complexity are generated using different prompting techniques and different LLMs. Errors in the generated ontologies are recorded and compared. To assess the quality of the generated ontologies, we use a semi-automated approach based on RDF syntax checking, OWL reasoning, and SHACL constraint validation. The results of this study are very promising: even for complex capabilities, the generated ontologies are almost free of errors.