As large language models (LLMs) are increasingly deployed as interactive agents, open-ended human-AI interactions can involve deceptive behaviors with serious real-world consequences, yet existing evaluations remain largely scenario-specific and model-centric. We introduce OpenDeception, a lightweight framework for jointly evaluating deception risk from both sides of a human-AI dialogue. It consists of a scenario benchmark of 50 real-world deception cases, an IntentNet that infers deceptive intent from agent reasoning, and a TrustNet that estimates user susceptibility. To address data scarcity, we synthesize high-risk dialogues via LLM-based role-and-goal simulation and train the User Trust Scorer with contrastive learning on controlled response pairs, avoiding unreliable scalar labels. Experiments on 11 LLMs and three large reasoning models show that, for most models, over 90% of goal-driven interactions exhibit deceptive intent, with stronger models displaying higher risk. A real-world case study adapted from a documented AI-induced suicide incident further demonstrates that our joint evaluation can proactively trigger warnings before critical trust thresholds are reached.
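To make the pairwise training idea concrete, the following is a minimal sketch, not the paper's implementation, of how a trust scorer could be trained with a margin ranking loss on controlled response pairs so that only the relative ordering within each pair is supervised and no absolute scalar trust labels are needed. The `TrustScorer` architecture, embedding dimension, margin, and optimizer settings are illustrative assumptions.

```python
# Hypothetical sketch: pairwise contrastive training of a trust scorer.
# Assumes (emb_hi, emb_lo) are embeddings of a controlled response pair where
# the first response elicits higher user trust; the MLP scorer and the margin
# ranking loss are illustrative choices, not the paper's specification.
import torch
import torch.nn as nn

class TrustScorer(nn.Module):
    """Maps a dialogue-response embedding to a scalar trust score."""
    def __init__(self, dim: int = 768, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

scorer = TrustScorer()
opt = torch.optim.AdamW(scorer.parameters(), lr=1e-4)
rank_loss = nn.MarginRankingLoss(margin=0.5)

def train_step(emb_hi: torch.Tensor, emb_lo: torch.Tensor) -> float:
    # The loss only enforces an ordering between the two responses in a pair,
    # so unreliable absolute trust labels are never required.
    s_hi, s_lo = scorer(emb_hi), scorer(emb_lo)
    target = torch.ones_like(s_hi)  # s_hi should exceed s_lo by the margin
    loss = rank_loss(s_hi, s_lo, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In such a setup, each controlled pair differs only in the manipulated trust-relevant feature, so the ranking objective isolates that feature's effect on the learned score; this is one plausible reading of "contrastive learning on controlled response pairs," offered here only as an assumption-laden illustration.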