The massive volume of online information, together with the problem of misinformation, has spurred active research into the automation of fact-checking. As with fact-checking by human experts, it is not enough for an automated fact-checker to be accurate; it must also be able to inform and convince the user of the validity of its predictions. This becomes possible with explainable artificial intelligence (XAI). In this work, we conduct a study of XAI fact-checkers with 180 participants to determine how users' actions towards news and their attitudes towards explanations are affected by XAI. Our results suggest that XAI has limited effects on users' agreement with the veracity predictions of the automated fact-checker and on their intent to share news. However, XAI nudges users towards forming uniform judgments of news veracity, signaling their reliance on the explanations. We also find polarized preferences towards XAI and raise several design considerations based on them.