With the widespread application of Large Language Models (LLMs) across domains, concerns have been raised about the trustworthiness of LLMs in safety-critical scenarios, due to their unpredictable tendency to hallucinate and generate misinformation. Existing LLMs have no inherent functionality to provide users with an uncertainty/confidence metric for each response they generate, making it difficult to evaluate trustworthiness. Although several studies aim to develop uncertainty quantification methods for LLMs, they have fundamental limitations, such as being restricted to classification tasks, requiring additional training and data, considering only lexical rather than semantic information, and operating prompt-wise rather than response-wise. This paper proposes a new framework, semantic density, to address these issues. Semantic density extracts uncertainty/confidence information for each response from a probability-distribution perspective in semantic space. It places no restriction on task types and is "off-the-shelf" for new models and tasks. Experiments on seven state-of-the-art LLMs, including the latest Llama 3 and Mixtral-8x22B models, on four free-form question-answering benchmarks demonstrate the superior performance and robustness of semantic density compared to prior approaches.
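To make the "probability distribution in semantic space" idea concrete, the following is a minimal toy sketch, not the paper's actual estimator: given a target response and a set of sampled responses with their generation probabilities, a response is scored as more confident when many high-probability samples lie close to it in embedding space. The embedding vectors, the Gaussian kernel on cosine distance, and the `bandwidth` parameter are all illustrative assumptions.

```python
import numpy as np

def semantic_density(target_emb, sample_embs, sample_probs, bandwidth=0.5):
    """Toy kernel-density-style confidence score (illustrative only):
    higher when high-probability sampled responses are semantically
    close to the target response."""
    # Normalize embeddings so the dot product is cosine similarity.
    t = target_emb / np.linalg.norm(target_emb)
    S = sample_embs / np.linalg.norm(sample_embs, axis=1, keepdims=True)
    sims = S @ t
    # Gaussian kernel on semantic distance (1 - cosine similarity).
    weights = np.exp(-((1.0 - sims) ** 2) / (2 * bandwidth ** 2))
    # Probability-weighted kernel average, normalized to lie in (0, 1].
    return float(np.dot(sample_probs, weights) / np.sum(sample_probs))

# Two sampled responses cluster near direction [1, 0]; one is an outlier.
embs = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
probs = np.array([0.4, 0.4, 0.2])
in_cluster = semantic_density(np.array([1.0, 0.05]), embs, probs)
outlier = semantic_density(np.array([-1.0, 0.2]), embs, probs)
```

In this toy setup, the in-cluster target receives a higher score than the outlier, mirroring the intuition that semantically consistent, high-probability responses signal confidence, while isolated responses signal uncertainty.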