Human users increasingly communicate with large language models (LLMs), yet LLMs are frequently overconfident in their outputs, even when their accuracy is questionable, which undermines their trustworthiness and perceived legitimacy. There is therefore a need for language models to signal their confidence in order to reap the benefits of human-machine collaboration and mitigate potential harms. Verbalized uncertainty is the expression of confidence through linguistic means, an approach that integrates naturally into language-based interfaces. Most recent research in natural language processing (NLP) overlooks the nuances of human uncertainty communication and the biases that shape how machines communicate and how humans communicate with them. We argue for anthropomimetic uncertainty, the principle that intuitive and trustworthy uncertainty communication requires a degree of imitation of human linguistic behavior. We present a thorough overview of research on human uncertainty communication, survey ongoing work in NLP, and perform additional analyses that demonstrate biases in verbalized uncertainty which have so far been underexplored. We conclude by pointing out factors unique to human-machine uncertainty communication and outlining future research directions toward implementing anthropomimetic uncertainty.