As corporations rush to integrate large language models (LLMs) into their search offerings, it is critical that they provide factually accurate information that is robust to any presuppositions a user may express. In this work, we introduce UPHILL, a dataset of health-related queries with varying degrees of presupposition. Using UPHILL, we evaluate the factual accuracy and consistency of InstructGPT, ChatGPT, and BingChat. We find that while model responses rarely disagree with true health claims (posed as questions), they often fail to challenge false ones: responses from InstructGPT agree with 32% of false claims, ChatGPT with 26%, and BingChat with 23%. As the degree of presupposition in the input query increases, responses from InstructGPT and ChatGPT agree with the claim considerably more often, regardless of its veracity. Responses from BingChat, which draw on retrieved webpages, are not as susceptible. Given the models' moderate factual accuracy and their inability to consistently correct false presuppositions, our work calls for a careful assessment of current LLMs for use in high-stakes scenarios.