It is commonly assumed that trust increases cooperation. However, game-theoretic models often fail to distinguish between cooperative actions and trust, making it difficult to independently measure trust and determine how its effects vary in different social dilemmas. To address this, we build on influential theories that equate trust with reduced monitoring of an agent's actions. We implement this as a heuristic that cognitively bounded agents can use in repeated games to avoid spending time and effort always monitoring their partner. Agents using this heuristic reduce monitoring of a partner's actions once a threshold level of cooperativeness has been observed -- providing a measurable and architecture-agnostic definition of trust. Using evolutionary game theory, we systematically analyse the success of strategies that use this trust heuristic across the entire space of two-player symmetric social dilemma games. We demonstrate that trust-as-reduced-monitoring facilitates cooperation in two different ways. First, when monitoring is costly, trust heuristics allow for higher levels of cooperation in social dilemmas where the temptation to defect is high. Second, when agents can make action errors, trust heuristics promote cooperation even in coordination problems. Our results disentangle trust from cooperation, and provide a behavioural measure of trust that applies across interaction types.
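The trust heuristic described above can be sketched in a few lines: an agent pays a per-round cost to monitor its partner's action and, once a threshold number of cooperative actions has been observed, stops monitoring and simply assumes cooperation. This is a minimal illustrative sketch, not the paper's actual implementation; the class name, threshold, and cost parameters are assumptions for illustration.

```python
class TrustAgent:
    """Cognitively bounded agent using a trust-as-reduced-monitoring heuristic.

    The agent pays a cost each round to monitor its partner's action. Once it
    has observed `threshold` consecutive cooperative actions, it stops
    monitoring (i.e. trusts) and assumes the partner cooperates thereafter.
    All names and values are illustrative assumptions, not the paper's code.
    """

    def __init__(self, threshold=3, monitoring_cost=0.1):
        self.threshold = threshold
        self.monitoring_cost = monitoring_cost
        self.observed_cooperations = 0
        self.trusting = False

    def play_round(self, partner_action):
        """Return (own_action, monitoring_cost_paid) for one round."""
        if self.trusting:
            # Trust established: cooperate without paying to monitor,
            # so a later defection by the partner goes unobserved.
            return "C", 0.0
        # Still monitoring: pay the cost and condition on the observation.
        if partner_action == "C":
            self.observed_cooperations += 1
            if self.observed_cooperations >= self.threshold:
                self.trusting = True  # threshold reached: stop monitoring
            return "C", self.monitoring_cost
        # Observed a defection: reset the count and reciprocate.
        self.observed_cooperations = 0
        return "D", self.monitoring_cost
```

Note how the heuristic disentangles trust from cooperation: trust is the measurable switch to not monitoring (`trusting = True`), distinct from the cooperative action itself, and the saved monitoring cost is what makes trusting strategies advantageous when monitoring is costly.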