AI-based tools that mediate, enhance, or generate parts of video communication may interfere with how people evaluate trustworthiness and credibility. In two preregistered online experiments (N = 2,000), we examined whether AI-mediated video retouching, background replacement, and avatars affect interpersonal trust, people's ability to detect lies, and their confidence in their judgments. Participants watched short videos of speakers making truthful or deceptive statements across three conditions with varying levels of AI mediation. We found that perceived trust and confidence in judgments declined in AI-mediated videos, particularly in settings in which some participants used avatars while others did not. However, participants' actual judgment accuracy remained unchanged, and they were no more inclined to suspect those using AI tools of lying. Our findings provide evidence against concerns that AI mediation undermines people's ability to distinguish truth from lies, and against cue-based accounts of lie detection more generally. They highlight the importance of trustworthy AI mediation tools in contexts where not only truth but also trust and confidence matter.