We introduce a general stochastic differential equation (SDE) framework for modelling multi-objective optimization dynamics in iterative Large Language Model (LLM) interactions. The framework captures the inherent stochasticity of LLM responses through explicit diffusion terms and reveals systematic interference patterns between competing objectives via an interference matrix formulation. We validate the framework using iterative code generation as a proof-of-concept application, analyzing 400 sessions across security, efficiency, and functionality objectives. The results demonstrate strategy-dependent convergence behaviors, with rates ranging from 0.33 to 1.29, and predictive accuracy reaching R² = 0.74 for balanced approaches. This work demonstrates the feasibility of dynamical systems analysis for multi-objective LLM interactions, with code generation serving as an initial validation domain.
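As a rough illustration of the kind of dynamics the abstract describes, the sketch below integrates a coupled SDE with Euler–Maruyama: an interference matrix couples three objective scores in the drift, and a diffusion term models response stochasticity. All names and numerical values here are illustrative assumptions, not the paper's actual model or fitted parameters.

```python
import numpy as np


def simulate(x0, A, mu, sigma, dt=0.01, steps=500, seed=0):
    """Euler-Maruyama integration of dx = A (mu - x) dt + sigma dW.

    A is a hypothetical interference matrix: off-diagonal entries
    model how progress on one objective interferes with the others.
    mu is a target score vector; sigma scales the diffusion term.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        drift = A @ (mu - x)                       # deterministic pull toward mu
        noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + drift * dt + noise                 # Euler-Maruyama step
        traj.append(x.copy())
    return np.array(traj)


# Three objectives (e.g. security, efficiency, functionality) with mild
# negative off-diagonal interference; values are purely illustrative.
A = np.array([[1.00, -0.20, -0.10],
              [-0.20, 1.00, -0.15],
              [-0.10, -0.15, 1.00]])
traj = simulate(x0=[0.2, 0.2, 0.2], A=A, mu=np.full(3, 0.8), sigma=0.02)
```

Because this A is symmetric positive definite, the drift contracts toward mu and the trajectory settles near the target scores, fluctuating with an amplitude set by sigma; strategy-dependent convergence rates in the paper would correspond to different drift spectra.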