As language models are deployed as autonomous agents that negotiate, cooperate, and compete on behalf of human principals, their strategic dispositions acquire direct economic consequences. Here we show, across 51,906 game-theoretic trials generating 826,990 strategic decisions from 25 large language models spanning seven developers and 38 canonical games, that models converge on competitive and coordination behaviour (coefficient of variation 0.06 for coordination, 0.11 for strategic depth) while diverging 48-fold on cooperation, from 1.5 per cent (GPT-5 Nano) to 71.5 per cent (Claude Opus 4.6). Provider identity is the dominant predictor of cooperative disposition, and this divergence is generationally unstable: OpenAI cooperation fell from 50.3 to 1.5 per cent across four model generations, while Google cooperation rose from 8.3 to 56.8 per cent. Endgame analysis reveals that Anthropic frontier models sustain 57 per cent cooperation in the final round of finitely repeated games, where backward induction predicts zero, whereas the newest Google models cooperate throughout but universally defect once punishment becomes impossible. These strategic personalities are shaped by training pipelines, shift unpredictably across model versions, and cannot be inferred from capability benchmarks, yet they determine the cooperative outcomes of every economic interaction these models mediate. The complete dataset and an interactive data explorer are publicly available at https://felipemaffonso.github.io/strategic-personalities/.
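The convergence and divergence figures above are simple dispersion statistics over per-model behavioural rates. A minimal sketch of how the coefficient of variation and the 48-fold cooperation spread could be computed is given below; only the two endpoint cooperation rates (0.015 for GPT-5 Nano, 0.715 for Claude Opus 4.6) come from the abstract, and the intermediate values and function name are illustrative placeholders, not the study's actual data or code.

```python
import statistics

def coefficient_of_variation(rates):
    """CV = population standard deviation / mean, a unitless dispersion measure."""
    return statistics.pstdev(rates) / statistics.mean(rates)

# Per-model cooperation rates as fractions. Only the two endpoints are taken
# from the abstract; the middle values are hypothetical for illustration.
cooperation = [0.015, 0.12, 0.30, 0.45, 0.715]

# Fold divergence between the least and most cooperative models: 0.715 / 0.015 ~ 48.
fold_divergence = max(cooperation) / min(cooperation)

print(f"cooperation CV: {coefficient_of_variation(cooperation):.2f}")
print(f"fold divergence: {fold_divergence:.0f}x")
```

A low CV (such as the reported 0.06 for coordination) indicates that models cluster tightly around the mean rate, whereas the wide cooperation spread shows up as a large CV and a large fold divergence.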