We develop and analyze a theoretical framework for agent-to-agent interactions in a simplified in-context linear regression setting. In our model, each agent is instantiated as a single-layer transformer with linear self-attention (LSA) trained to implement gradient-descent-like updates on a quadratic regression objective from in-context examples. We then study the coupled dynamics when two such LSA agents alternately update from each other's outputs under potentially misaligned fixed objectives. Within this framework, we characterize the generation dynamics and show that misalignment leads to a biased equilibrium where neither agent reaches its target, with residual errors predictable from the objective gap and the prompt-induced geometry. We also characterize an adversarial regime where asymmetric convergence is possible: one agent reaches its objective exactly while inducing persistent bias in the other. We further contrast this fixed-objective regime with an adaptive multi-agent setting, wherein a helper agent updates a turn-based objective to implement a Newton-like step for the main agent, eliminating the bias plateau and accelerating its convergence. Experiments with trained LSA agents, as well as black-box GPT-5-mini runs on in-context linear regression tasks, are consistent with our theoretical predictions within this simplified setting. We view our framework as a mechanistic account that links prompt geometry and objective misalignment to stability, bias, and robustness, and as a stepping stone toward analyzing more realistic multi-agent LLM systems.
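As background for the single-agent building block, a minimal sketch of the standard correspondence between linear self-attention and one gradient step on the in-context least-squares loss (the paper's precise parameterization, initialization $w_0$, and step size $\eta$ may differ):
\begin{align*}
\mathcal{L}(w) &= \frac{1}{2N}\sum_{i=1}^{N}\bigl(\langle w, x_i\rangle - y_i\bigr)^2,
\qquad
\nabla \mathcal{L}(w_0) = \frac{1}{N}\sum_{i=1}^{N}\bigl(\langle w_0, x_i\rangle - y_i\bigr)\,x_i,\\[2pt]
w_1 &= w_0 - \eta\,\nabla\mathcal{L}(w_0),
\qquad
\hat{y}_{\mathrm{query}} = \langle w_1,\, x_{\mathrm{query}}\rangle
= \langle w_0, x_{\mathrm{query}}\rangle
+ \frac{\eta}{N}\sum_{i=1}^{N}\bigl(y_i - \langle w_0, x_i\rangle\bigr)\,\langle x_i,\, x_{\mathrm{query}}\rangle .
\end{align*}
The final expression is a weighted sum of inner products between in-context examples and the query, which is exactly the form an LSA layer can compute; iterating this map across turns, with each agent's output feeding the other's prompt, yields the coupled dynamics analyzed above.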