Agentic systems built on large language models operate through recursive feedback loops in which each output becomes the next input. Yet the geometric behavior of these agentic loops (whether they converge, diverge, or exhibit more complex dynamics) remains poorly understood. This paper introduces a geometric framework for analyzing agentic trajectories in semantic embedding space, treating iterative transformations as discrete dynamical systems. We distinguish the artifact space, where linguistic transformations occur, from the embedding space, where geometric measurements are performed. Because cosine similarity is biased by embedding anisotropy, we introduce an isotonic calibration that eliminates this systematic bias and aligns similarities with human semantic judgments while preserving high local stability. This enables rigorous measurement of trajectories, clusters, and attractors. Through controlled experiments on single agentic loops, we identify two fundamental regimes: a contractive rewriting loop converges toward a stable attractor with decreasing dispersion, while an exploratory summarize-and-negate loop produces unbounded divergence with no cluster formation. These regimes display qualitatively distinct geometric signatures of contraction and expansion. Our results show that prompt design directly governs the dynamical regime of an agentic loop, enabling systematic control of convergence, divergence, and trajectory structure in iterative LLM transformations.
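The isotonic calibration mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the data below are invented for illustration, and the fit uses a generic pool-adjacent-violators (PAVA) isotonic regression that maps raw, anisotropy-inflated cosine similarities onto a scale aligned with human judgments while preserving their ordering (and hence local stability).

```python
def isotonic_fit(y):
    """Pool Adjacent Violators (PAVA): least-squares monotone
    non-decreasing fit to the sequence y."""
    blocks = []  # each entry: (block mean, block size)
    for v in y:
        mean, cnt = float(v), 1
        # Pool with preceding blocks while monotonicity is violated.
        while blocks and blocks[-1][0] > mean:
            pm, pc = blocks.pop()
            mean = (pm * pc + mean * cnt) / (pc + cnt)
            cnt += pc
        blocks.append((mean, cnt))
    fitted = []
    for mean, cnt in blocks:
        fitted.extend([mean] * cnt)
    return fitted


def calibrate(x, raw_scores, fitted):
    """Map a raw cosine similarity x to a calibrated score by
    piecewise-linear interpolation of the isotonic fit."""
    if x <= raw_scores[0]:
        return fitted[0]
    if x >= raw_scores[-1]:
        return fitted[-1]
    for i in range(1, len(raw_scores)):
        if x <= raw_scores[i]:
            t = (x - raw_scores[i - 1]) / (raw_scores[i] - raw_scores[i - 1])
            return fitted[i - 1] + t * (fitted[i] - fitted[i - 1])


# Illustrative (invented) data: anisotropy compresses raw cosine
# similarities into a narrow high band, while human judgments span [0, 1].
raw = [0.62, 0.71, 0.78, 0.83, 0.88, 0.92, 0.95, 0.97]  # sorted raw scores
human = [0.05, 0.20, 0.15, 0.45, 0.60, 0.75, 0.88, 0.95]

fitted = isotonic_fit(human)          # pools the (0.20, 0.15) violation
score = calibrate(0.90, raw, fitted)  # calibrated similarity for a new pair
```

Because the fitted map is monotone, calibration removes the systematic offset without reordering any pair of similarities, which is the property that keeps trajectory and cluster measurements comparable before and after calibration.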