It is commonly assumed that trust increases cooperation. However, game-theoretic models often fail to distinguish between cooperative actions and trust, making it difficult to independently measure trust and determine how its effects vary in different social dilemmas. To address this, we build on influential theories that equate trust with reduced monitoring of an agent's actions. We implement this as a heuristic that cognitively bounded agents can use in repeated games to avoid spending time and effort always monitoring their partner. Agents using this heuristic reduce monitoring of a partner's actions once a threshold level of cooperativeness has been observed -- providing a measurable and architecture-agnostic definition of trust. Using evolutionary game theory, we systematically analyse the success of strategies that use this trust heuristic across the entire space of two-player symmetric social dilemma games. We demonstrate that trust-as-reduced-monitoring facilitates cooperation in two different ways. First, when monitoring is costly, trust heuristics allow for higher levels of cooperation in social dilemmas where the temptation to defect is high. Second, when agents can make action errors, trust heuristics promote cooperation even in coordination problems. Our results disentangle trust from cooperation, and provide a behavioural measure of trust that applies across interaction types.
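The trust heuristic described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's exact model: all names (`TrustingAgent`, `threshold`, `monitor_cost`) and the tit-for-tat-style response while monitoring are assumptions chosen for concreteness. The agent pays a cost to monitor its partner each round until it has observed a threshold number of consecutive cooperative moves, after which it stops monitoring and simply assumes cooperation.

```python
class TrustingAgent:
    """Sketch of a trust-as-reduced-monitoring heuristic in a repeated game.

    Illustrative only: the agent monitors its partner (paying a cost per
    round) until it has seen `threshold` consecutive cooperations, then
    stops monitoring and cooperates unconditionally.
    """

    def __init__(self, threshold=3, monitor_cost=0.1):
        self.threshold = threshold        # cooperations needed before trusting
        self.monitor_cost = monitor_cost  # cost paid each monitored round
        self.observed_coops = 0           # consecutive cooperations seen so far
        self.trusting = False             # once True, monitoring stops

    def act(self, partner_last_action):
        """Return (own_action, cost_paid_this_round).

        While monitoring, reciprocates the partner's observed action
        (tit-for-tat-like, an assumed choice); once trusting, cooperates
        without observing and pays no monitoring cost.
        """
        if self.trusting:
            return "C", 0.0               # trust: skip monitoring entirely
        # Monitoring branch: observe the partner's last action, pay the cost.
        if partner_last_action == "C":
            self.observed_coops += 1
            if self.observed_coops >= self.threshold:
                self.trusting = True      # threshold reached: start trusting
        else:
            self.observed_coops = 0       # any observed defection resets the count
        # Reciprocate what was observed; cooperate on the very first round.
        response = partner_last_action if partner_last_action else "C"
        return response, self.monitor_cost
```

Against an always-cooperating partner, such an agent pays the monitoring cost for only the first `threshold` rounds and then free-rides on its own trust, which is the cost saving the abstract's first mechanism refers to.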