Recent studies have demonstrated that large language models (LLMs) are susceptible to being misled by false premise questions (FPQs), leading to errors in factual knowledge, known as factuality hallucination. Existing benchmarks that assess this vulnerability rely primarily on manual construction, which limits both their scale and their scalability. In this work, we introduce an automated, scalable pipeline for creating FPQs from knowledge graphs (KGs). We first modify true triplets extracted from KGs to create false premises; then, leveraging the capabilities of state-of-the-art GPT models, we generate semantically rich FPQs. Based on this method, we present a comprehensive benchmark, the Knowledge Graph-based False Premise Questions (KG-FPQ), which contains approximately 178k FPQs across three knowledge domains, at six levels of confusability, and in two task formats. Using KG-FPQ, we conduct extensive evaluations of several representative LLMs and provide valuable insights. The KG-FPQ dataset and code are available at https://github.com/yanxuzhu/KG-FPQ.
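To make the two-step pipeline concrete, the sketch below forges a false premise by editing a true KG triplet and then builds a prompt asking a generator model to phrase it as an FPQ. The specific triplet, distractor pool, and prompt wording are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of the KG-based FPQ construction pipeline, assuming a
# simple editing rule: swap the true object for a same-relation distractor.
import random

# A true triplet extracted from a knowledge graph: (subject, relation, object).
true_triplet = ("Leonardo da Vinci", "painted", "Mona Lisa")

# Hypothetical candidate objects sharing the relation, used as distractors.
same_relation_objects = ["The Starry Night", "Girl with a Pearl Earring"]

def make_false_premise(triplet, candidates):
    """Replace the true object with a distractor, yielding a false triplet."""
    subj, rel, _ = triplet
    return (subj, rel, random.choice(candidates))

def fpq_prompt(false_triplet):
    """Build a prompt asking a GPT model to phrase the false triplet as an FPQ."""
    subj, rel, obj = false_triplet
    return (
        f"Write a natural question that presupposes the false statement "
        f"'{subj} {rel} {obj}', without revealing that it is false."
    )

false_triplet = make_false_premise(true_triplet, same_relation_objects)
print(fpq_prompt(false_triplet))
```

In this sketch, the model's response to the printed prompt would be a question such as "When did Leonardo da Vinci paint The Starry Night?", which embeds the false premise; varying how close the distractor is to the true object is one plausible way to realize different levels of confusability.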