The rapid advancement of Large Language Models (LLMs) and conversational assistants necessitates dynamic, scalable, and configurable conversational datasets for training and evaluation. These datasets must accommodate diverse user interaction modes, including text and voice, each presenting unique modeling challenges. Knowledge Graphs (KGs), with their structured and evolving nature, offer an ideal foundation for current and precise knowledge. Although human-curated KG-based conversational datasets exist, they struggle to keep pace with rapidly changing user information needs. We present ConvKGYarn, a scalable method for generating up-to-date and configurable conversational KGQA datasets. Qualitative psychometric analyses confirm that our method can generate high-quality datasets rivaling a popular conversational KGQA dataset, while also offering scale and coverage of a wide range of human-interaction configurations. We showcase its utility by testing LLMs on diverse conversations, exploring model behavior on conversational KGQA sets with different configurations grounded in the same KG fact set. Our results highlight the ability of ConvKGYarn to improve KGQA foundations and to evaluate the parametric knowledge of LLMs, offering a robust solution to the constantly evolving landscape of conversational assistants.