Large Language Models and commercial speech synthesis systems now enable highly realistic AI-generated voice scams (vishing), raising urgent concerns about deception at scale. Yet it remains unclear whether individuals can reliably distinguish AI-generated speech from human-recorded voices in realistic scam contexts, and what perceptual strategies underlie their judgments. We conducted a controlled online study in which 22 participants evaluated 16 vishing-style audio clips (8 AI-generated, 8 human-recorded), classifying each as human or AI and reporting their confidence. Participants performed poorly: mean accuracy was 37.5%, below the 50% chance level for a binary classification task. At the stimulus level, misclassification was bidirectional: 75% of AI-generated clips were majority-labeled as human, while 62.5% of human-recorded clips were majority-labeled as AI. Signal Detection Theory analysis revealed near-zero discriminability (d' ≈ 0), indicating an inability to reliably distinguish synthetic from human voices rather than a simple response bias. Qualitative analysis of 315 coded excerpts revealed reliance on paralinguistic and emotional heuristics, including pauses, filler words, vocal variability, cadence, and emotional expressiveness. However, these surface-level cues, traditionally associated with human authenticity, were frequently replicated by the AI-generated samples. Misclassifications were often accompanied by moderate to high confidence, suggesting perceptual miscalibration rather than uncertainty. Together, our findings demonstrate that authenticity judgments based on vocal heuristics are unreliable in contemporary vishing scenarios. We discuss implications for security interventions, user education, and the mitigation of AI-mediated deception.
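For readers unfamiliar with the Signal Detection Theory index mentioned above, the sketch below illustrates how d' is conventionally computed from hit and false-alarm rates (d' = z(hit rate) - z(false-alarm rate)). The counts used are hypothetical placeholders for illustration only, not data from the study, and the log-linear correction shown is one common choice, not necessarily the one used here.

```python
# Minimal sketch of the SDT discriminability index d'.
# Counts below are hypothetical, not values from the study.
from scipy.stats import norm


def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) guards against
    infinite z-scores when a rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)


# Illustrative usage with made-up counts: near-equal hit and false-alarm
# rates yield d' close to 0, i.e., no reliable discrimination.
print(d_prime(hits=5, misses=3, false_alarms=5, correct_rejections=3))
```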