Text-based misinformation permeates online discourse, yet evidence of people's ability to discern truth from such deceptive textual content is scarce. We analyze a novel TV game show dataset in which conversations between individuals with conflicting objectives in a high-stakes environment produce lies. We investigate how potentially verifiable language cues of deception manifest in the presence of objective truth, a distinguishing feature absent from previous text-based deception datasets. We show that there exists a class of detectors (algorithms) whose truth-detection performance is comparable to that of human subjects, even though the former access only language cues while the latter engage in conversations with complete access to all potential sources of cues (language and audio-visual). Our model, built on a large language model, employs a bottleneck framework to learn discernible cues for determining truth, an act of reasoning at which human subjects often perform poorly, even when incentivized. In many cases where humans fail to detect deception, our model identifies novel but accurate language cues, opening up the possibility of humans collaborating with algorithms to improve their ability to detect the truth.