This work is about estimating the hallucination rate for in-context learning (ICL) with generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and asked to make a prediction based on that dataset. The Bayesian interpretation of ICL assumes that the CGM computes a posterior predictive distribution over an unknown Bayesian model of a latent parameter and data. From this perspective, we define a \textit{hallucination} as a generated prediction that has low probability under the true latent parameter. We develop a new method that takes an ICL problem -- that is, a CGM, a dataset, and a prediction question -- and estimates the probability that the CGM will generate a hallucination. Our method requires only generating queries and responses from the model and evaluating its response log probability. We empirically evaluate our method on synthetic regression and natural language ICL tasks using large language models.
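The estimation procedure described above can be illustrated with a minimal Monte Carlo sketch. This is not the paper's exact estimator: the `ToyCGM` class, its `sample`/`log_prob` interface, and the threshold `log_eps` are all hypothetical stand-ins for an LLM scored via its token log probabilities. The sketch samples responses from the model's own predictive distribution, scores each sampled response, and reports the fraction whose log probability falls below a chosen threshold.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a conditional generative model (CGM): a fixed
# categorical distribution over responses. A real CGM would be an LLM
# conditioned on the in-context dataset and query.
class ToyCGM:
    def __init__(self, probs):
        self.probs = probs  # dict mapping response -> probability

    def sample(self):
        # Draw one response from the categorical distribution.
        r = random.random()
        acc = 0.0
        for resp, p in self.probs.items():
            acc += p
            if r < acc:
                return resp
        return resp  # guard against floating-point round-off

    def log_prob(self, resp):
        # Log probability the model assigns to a given response.
        return math.log(self.probs[resp])


def estimate_hallucination_rate(model, n_samples=10_000,
                                log_eps=math.log(0.05)):
    """Monte Carlo estimate of P(log p(y) < log_eps) under the model's
    predictive distribution: sample responses, score them, and count the
    low-probability ones."""
    flagged = 0
    for _ in range(n_samples):
        y = model.sample()
        if model.log_prob(y) < log_eps:
            flagged += 1
    return flagged / n_samples


# "banana" carries probability 0.02 < 0.05, so roughly 2% of sampled
# responses should be flagged as hallucinations.
model = ToyCGM({"yes": 0.7, "no": 0.28, "banana": 0.02})
rate = estimate_hallucination_rate(model)
```

In the paper's setting the threshold comparison is made against the response probability under the true latent parameter rather than the model's own predictive, but the mechanics — sampling from the model and evaluating response log probabilities — are the same as in this toy version.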