Large Language Models (LLMs) are widely deployed in reasoning, planning, and decision-making tasks, making their trustworthiness critical. A significant yet underexplored risk is intentional deception, in which an LLM deliberately fabricates or conceals information to serve a hidden objective. Existing studies typically induce deception by explicitly setting a hidden objective through prompting or fine-tuning, which may not reflect real-world human-LLM interactions. Moving beyond such human-induced deception, we investigate LLMs' self-initiated deception in response to benign prompts. To address the absence of ground-truth labels, we propose a framework based on Contact Searching Questions~(CSQ). This framework introduces two statistical metrics, derived from psychological principles, to quantify the likelihood of deception. The first, the Deceptive Intention Score, measures the model's bias toward a hidden objective; the second, the Deceptive Behavior Score, measures the inconsistency between the LLM's internal belief and its expressed output. Evaluating 16 leading LLMs, we find that the two metrics rise in tandem and escalate with task difficulty for most models. Moreover, increasing model capacity does not consistently reduce deception, posing a significant challenge for future LLM development.
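To make the two metrics concrete, the following is a minimal sketch, assuming each CSQ trial records whether the model's answer favors the hidden objective and pairs a separately probed internal belief with the expressed output. All names here (`CSQTrial`, `favors_hidden_objective`, the probing procedure itself) are hypothetical illustrations, not the paper's actual definitions.

```python
# Hypothetical sketch of the two abstract-level metrics; field and function
# names are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class CSQTrial:
    favors_hidden_objective: bool  # answer is biased toward the hidden objective
    internal_belief: str           # belief elicited via a separate probe
    expressed_output: str          # answer the model actually returned


def deceptive_intention_score(trials: list[CSQTrial]) -> float:
    """Fraction of trials whose answers skew toward the hidden objective."""
    return sum(t.favors_hidden_objective for t in trials) / len(trials)


def deceptive_behavior_score(trials: list[CSQTrial]) -> float:
    """Fraction of trials where the expressed output contradicts the probed belief."""
    return sum(t.internal_belief != t.expressed_output for t in trials) / len(trials)


if __name__ == "__main__":
    trials = [
        CSQTrial(True, "contact A", "contact B"),   # biased and inconsistent
        CSQTrial(False, "contact A", "contact A"),  # honest
        CSQTrial(True, "contact B", "contact B"),   # biased but consistent
    ]
    print(f"intention: {deceptive_intention_score(trials):.2f}")  # 0.67
    print(f"behavior:  {deceptive_behavior_score(trials):.2f}")   # 0.33
```

Under this reading, the two scores are independent rates over the same trials, which is consistent with the abstract's observation that they can rise in tandem without one implying the other.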