We study how delegating pricing to large language models (LLMs) can facilitate collusion in a duopoly when both sellers rely on the same pre-trained model. The LLM is characterized by (i) a propensity parameter capturing its internal bias toward high-price recommendations and (ii) an output-fidelity parameter measuring how tightly outputs track that bias; the propensity evolves through retraining. We show that configuring LLMs for robustness and reproducibility can induce collusion via a phase transition: there exists a critical output-fidelity threshold that pins down long-run behavior. Below it, competitive pricing is the unique long-run outcome. Above it, the system is bistable: competitive and collusive pricing are both locally stable, and the realized outcome is determined by the model's initial propensity. The collusive regime resembles tacit collusion: prices are elevated on average, yet occasional low-price recommendations provide plausible deniability. With perfect fidelity, full collusion emerges from any interior initial condition. For finite training batches of size $b$, infrequent retraining (driven by computational costs) further amplifies collusion: conditional on starting in the collusive basin, the probability of collusion approaches one as $b$ grows, since larger batches dampen the stochastic fluctuations that might otherwise tip the system toward competition. The indeterminacy region shrinks at rate $O(1/\sqrt{b})$.
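The $O(1/\sqrt{b})$ rate is what a central-limit heuristic would predict. A minimal sketch, under the assumption (our reading, not spelled out above) that a retraining round resets the propensity $\theta$ to the empirical high-price frequency of a batch of $b$ recommendations, each drawn with some probability $g(\theta)$:

\[
\bar X_b = \frac{1}{b}\sum_{i=1}^{b} X_i,\qquad X_i \stackrel{\text{iid}}{\sim} \mathrm{Bernoulli}\big(g(\theta)\big)
\quad\Longrightarrow\quad
\operatorname{sd}\big(\bar X_b\big) = \sqrt{\frac{g(\theta)\,(1-g(\theta))}{b}} = O\big(b^{-1/2}\big).
\]

Only initial propensities within a few standard deviations of the unstable fixed point separating the two basins can be pushed either way by sampling noise, which is why the indeterminate band has width $O(1/\sqrt{b})$.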
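The phase transition and the bistability can be illustrated with a toy version of the dynamic. The functional forms below (the `sharpen` curve, the competitive baseline `p_comp`, and the batch-resampling update in `retrain`) are illustrative assumptions of ours, not the paper's model: the fidelity parameter `lam` interpolates between a near-competitive default and a self-reinforcing rule that tracks the propensity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sharpen(theta):
    """Self-reinforcing curve S(t) = t^2 / (t^2 + (1-t)^2):
    S(0)=0, S(1)=1, S(1/2)=1/2 with slope 2 at 1/2, which is what
    makes the high-fidelity map bistable."""
    return theta**2 / (theta**2 + (1 - theta)**2)

def rec_prob(theta, lam, p_comp=0.1):
    """Probability of recommending the high (collusive) price.
    lam is the output-fidelity knob: lam=0 ignores the propensity and
    plays a near-competitive default; lam=1 tracks it exactly."""
    return (1 - lam) * p_comp + lam * sharpen(theta)

def retrain(theta0, lam, b, steps=400):
    """Each retraining round replaces the propensity with the empirical
    high-price frequency in a batch of b recommendations, so the noise
    per round has standard deviation O(1/sqrt(b))."""
    theta = theta0
    for _ in range(steps):
        theta = rng.binomial(b, rec_prob(theta, lam)) / b
    return theta

# Below the critical fidelity both starting propensities end up
# competitive; above it, the initial propensity selects the regime.
for lam in (0.6, 0.95):
    low = np.mean([retrain(0.3, lam, b=2000) for _ in range(20)])
    high = np.mean([retrain(0.8, lam, b=2000) for _ in range(20)])
    print(f"lam={lam}: start 0.3 -> {low:.2f}, start 0.8 -> {high:.2f}")
```

In this toy, `lam = 0.6` sends both starting propensities to the low, competitive fixed point, while `lam = 0.95` splits them by basin of attraction; shrinking `b` widens the band of starting points whose outcome sampling noise can flip, consistent with the $O(1/\sqrt{b})$ indeterminacy region.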