Motivated by the fact that the worth of a coalition may depend on the order in which agents arrive, Nowak and Radzik (1994) (NR) introduced cooperative games with generalized characteristic functions. We study such temporal cooperative games (TCGs), where the worth function v is defined on sequences of agents π rather than on sets S. This order sensitivity necessitates a re-examination of the axioms for reward sharing. NR and subsequent work proposed several axioms, but the resulting solution concepts remain inherently order-oblivious and closely tied to the Shapley value. In contrast, we focus on sequential solution concepts that explicitly depend on the realized order π. We study reward-sharing mechanisms satisfying incentive for optimal arrival (I4OA), which promotes orders that maximize total worth; online individual rationality (OIR), which ensures agents are not harmed by later arrivals; and sequential efficiency (SE), which requires that the worth of any sequence be fully distributed among its agents. These axioms are intrinsic to TCGs, and we characterize a class of reward-sharing mechanisms uniquely determined by them. The classical Shapley value does not extend directly to this setting. We therefore construct natural Shapley analogs in two worlds: a sequential world, where rewards are defined for each sequence-agent pair, and an extended world, where rewards are defined per agent, consistent with the NR framework. In both cases, the axioms of efficiency, additivity, and null player uniquely characterize the corresponding Shapley analogs. However, these Shapley analogs are disjoint from the class of solutions satisfying the sequential axioms, even for convex and simple TCGs.
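To make the order-sensitivity concrete, here is a minimal sketch (not from the paper) of a TCG whose worth function is defined on sequences rather than sets, together with a marginal-contribution reward scheme along the realized order π. The specific worth function and agent values are illustrative assumptions; the point is that marginal contributions along π telescope, so the rewards always sum to the worth of the full sequence, which is exactly the sequential efficiency (SE) requirement.

```python
# Hypothetical order-sensitive worth function: an agent contributes more
# the earlier it arrives (position-weighted values). Illustrative only.
def worth(seq):
    base = {"a": 3.0, "b": 2.0, "c": 1.0}
    return sum(base[agent] / (pos + 1) for pos, agent in enumerate(seq))

def marginal_rewards(seq):
    # Reward each agent its marginal contribution along the realized order.
    # These rewards telescope: their sum equals worth(seq), so SE holds.
    rewards = {}
    for i, agent in enumerate(seq):
        rewards[agent] = worth(seq[: i + 1]) - worth(seq[:i])
    return rewards

pi = ("a", "b", "c")
r = marginal_rewards(pi)
# Sequential efficiency: rewards distribute the worth of pi exactly.
assert abs(sum(r.values()) - worth(pi)) < 1e-9
# Order sensitivity: a different arrival order yields a different total worth.
assert worth(("c", "b", "a")) != worth(pi)
```

Note that this scheme depends on the realized order π, unlike set-based values: reversing the sequence changes both the total worth and each agent's reward, which is the phenomenon the sequential axioms above are designed to govern.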