As foundation models (FMs) approach human-level fluency, distinguishing synthetic from organic content has become a key challenge for Trustworthy Web Intelligence. This paper presents a dual-axis framework, comprising JudgeGPT and RogueGPT, that decouples "authenticity" from "attribution" to investigate the mechanisms of human susceptibility. Analyzing 918 evaluations across five FMs (including GPT-4 and Llama-2), we employ Structural Causal Models (SCMs) as the principal framework for formulating testable causal hypotheses about detection accuracy. Contrary to partisan narratives, we find that political orientation shows a negligible association with detection performance ($r=-0.10$). Instead, "fake news familiarity" emerges as a candidate mediator ($r=0.35$), suggesting that prior exposure may function as a form of adversarial training for human discriminators. We identify a "fluency trap" in which GPT-4 outputs (HumanMachineScore: 0.20) bypass source-monitoring mechanisms, rendering them indistinguishable from human text. These findings suggest that "pre-bunking" interventions should target cognitive source monitoring rather than demographic segmentation to ensure trustworthy information ecosystems.
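To make the causal claims concrete, a minimal SCM sketch can be written as follows; the variable symbols and the functional decomposition here are illustrative assumptions for exposition, not the paper's exact specification:

$$F := f_F(E, U_F), \qquad D := f_D(P, F, U_D),$$

where $P$ denotes political orientation, $E$ exposure to fake news, $F$ fake-news familiarity, $D$ detection performance, and $U_F$, $U_D$ exogenous noise terms. Under this sketch, the reported correlations map onto two testable hypotheses: a mediating effect of familiarity, $\partial f_D / \partial F > 0$ (consistent with $r=0.35$), and a negligible direct effect of orientation, $\partial f_D / \partial P \approx 0$ (consistent with $r=-0.10$).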