Knowledge graphs use nodes, relationships, and properties to represent arbitrarily complex data. When a knowledge graph is stored in a graph database, the Cypher query language enables efficient modeling and querying. However, using Cypher requires specialized knowledge, which can present a challenge for non-expert users. Our work, Text2Cypher, aims to bridge this gap by translating natural language queries into the Cypher query language, extending the utility of knowledge graphs to non-technical users. While large language models (LLMs) can be used for this purpose, they often struggle to capture complex nuances, resulting in incomplete or incorrect outputs. Fine-tuning LLMs on domain-specific datasets has proven to be a more promising approach, but the limited availability of high-quality, publicly available Text2Cypher datasets makes this challenging. In this work, we show how we combined, cleaned, and organized several publicly available datasets into a total of 44,387 instances, enabling effective fine-tuning and evaluation. Models fine-tuned on this dataset showed significant performance gains, with improvements in Google-BLEU and Exact Match scores over baseline models, highlighting the importance of high-quality datasets and fine-tuning in improving Text2Cypher performance.
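To make the task concrete, the sketch below shows what a single Text2Cypher instance might look like, along with a simple whitespace-normalized Exact Match check. The field names, the example question/query pair, and the normalization scheme are all illustrative assumptions, not the actual schema or metric implementation of the released dataset.

```python
# Hypothetical Text2Cypher dataset instance; field names are illustrative,
# not the actual schema of the released dataset.
instance = {
    "question": "Which actors appeared in the movie 'Inception'?",
    "schema": "(:Person)-[:ACTED_IN]->(:Movie {title: STRING})",
    "cypher": "MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title: 'Inception'}) "
              "RETURN p.name",
}

def exact_match(predicted: str, reference: str) -> bool:
    """One plausible Exact Match variant: compare queries after
    collapsing runs of whitespace (an assumed normalization)."""
    normalize = lambda q: " ".join(q.split())
    return normalize(predicted) == normalize(reference)

# A model output differing only in whitespace still scores as a match here.
prediction = ("MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title: 'Inception'})"
              "   RETURN p.name")
print(exact_match(prediction, instance["cypher"]))
```

Metrics like Google-BLEU operate on token overlap instead, so they give partial credit to near-miss queries that a strict Exact Match would score as zero.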