Multi-agent systems powered by large language models are advancing rapidly, yet the tension between mutual trust and security remains underexplored. We introduce and empirically validate the Trust-Vulnerability Paradox (TVP): increasing inter-agent trust to enhance coordination simultaneously expands risks of over-exposure and over-authorization. To investigate this paradox, we construct a scenario-game dataset spanning 3 macro scenes and 19 sub-scenes, and run extensive closed-loop interactions with trust explicitly parameterized. Using Minimum Necessary Information (MNI) as the safety baseline, we propose two unified metrics: Over-Exposure Rate (OER) to detect boundary violations, and Authorization Drift (AD) to capture sensitivity to trust levels. Results across multiple model backends and orchestration frameworks reveal consistent trends: higher trust improves task success but also heightens exposure risks, with heterogeneous trust-to-risk mappings across systems. We further examine defenses such as Sensitive Information Repartitioning and Guardian-Agent enablement, both of which reduce OER and attenuate AD. Overall, this study formalizes TVP, establishes reproducible baselines with unified metrics, and demonstrates that trust must be modeled and scheduled as a first-class security variable in multi-agent system design.
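To make the two metrics concrete, the following is a minimal sketch of how OER and AD could be computed from per-episode interaction logs. All names here (`Episode`, `over_exposure_rate`, `authorization_drift`) and the exact formulas (set difference against the MNI baseline for OER; a least-squares slope of exposure versus trust for AD) are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch: OER and AD over logged episodes.
# Assumes each episode records the trust level, the Minimum Necessary
# Information (MNI) set for the task, and the set of items actually exposed.
from dataclasses import dataclass, field


@dataclass
class Episode:
    trust: float                               # parameterized trust level in [0, 1]
    mni: set = field(default_factory=set)      # Minimum Necessary Information baseline
    exposed: set = field(default_factory=set)  # items the agents actually shared


def over_exposure_rate(episodes):
    """Fraction of exposed items falling outside the MNI baseline."""
    extra = sum(len(e.exposed - e.mni) for e in episodes)
    total = sum(len(e.exposed) for e in episodes)
    return extra / total if total else 0.0


def authorization_drift(episodes):
    """Least-squares slope of per-episode over-exposure vs. trust:
    how sensitively exposure risk responds to raising trust."""
    xs = [e.trust for e in episodes]
    ys = [len(e.exposed - e.mni) / max(len(e.exposed), 1) for e in episodes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var if var else 0.0
```

Under this reading, a positive AD means that raising trust systematically widens the gap between what agents share and what the task strictly requires, which is the quantitative signature of the Trust-Vulnerability Paradox.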