The widespread adoption and transformative impact of large language models (LLMs) have sparked concerns about their capacity to produce inaccurate and fictitious content, referred to as `hallucinations'. Given the potential risks hallucinations pose, humans should be able to identify them. This research aims to understand human perception of LLM hallucinations by systematically varying the degree of hallucination (genuine, minor hallucination, major hallucination) and examining its interaction with a warning (i.e., a warning of potential inaccuracies: absent vs. present). Participants (N=419) from Prolific rated the perceived accuracy of content and engaged with it (e.g., like, dislike, share) in a Q/A format. Participants ranked content as truthful in the order genuine, minor hallucination, and major hallucination, and user engagement behaviors mirrored this pattern. More importantly, we observed that the warning improved hallucination detection without significantly affecting the perceived truthfulness of genuine content. We conclude by offering insights for future tools to aid human detection of hallucinations. All survey materials, demographic questions, and post-session questions are available at: https://github.com/MahjabinNahar/fakes-of-varying-shades-survey-materials