Dynamical systems theory provides a framework for analyzing iterative processes and their evolution over time. Within such systems, repeated application of a transformation can lead to stable configurations, known as attractors, including fixed points and limit cycles. Applying this perspective to large language models (LLMs), which iteratively map input text to output text, offers a principled approach to characterizing their long-term behavior. Successive paraphrasing serves as a compelling testbed for exploring such dynamics, as paraphrases re-express the same underlying meaning with linguistic variation. Although LLMs are expected to explore a diverse set of paraphrases in the text space, our study reveals that successive paraphrasing converges to stable periodic states, such as period-2 attractor cycles, limiting linguistic diversity. We attribute this phenomenon to the self-reinforcing nature of LLMs, which iteratively favor and amplify certain textual forms over others. The pattern persists even when generation randomness is increased or when prompts and models are alternated. These findings underscore inherent constraints in LLMs' generative capabilities, while offering a novel dynamical systems perspective for studying their expressive potential.
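The notion of a period-2 attractor can be made concrete with a classic iterated map. The sketch below uses the logistic map at parameter r = 3.2 purely as an illustration (the map, the parameter value, and the starting point are our choices here, not part of the paraphrasing setup): after a transient, the orbit stops exploring new states and alternates between two fixed values.

```python
def logistic(x, r=3.2):
    # One iteration of the logistic map x -> r * x * (1 - x).
    # At r = 3.2 this map has a stable period-2 attractor.
    return r * x * (1 - x)

# Burn in past the transient so the state settles onto the attractor.
x = 0.4
for _ in range(1000):
    x = logistic(x)

# Sample four successive states: the orbit alternates between two
# values, i.e. it has collapsed onto a 2-cycle.
orbit = []
for _ in range(4):
    orbit.append(x)
    x = logistic(x)
```

In the paraphrasing setting, the loose analogue of this 2-cycle would be a pair of texts that each paraphrase back into the other, so that further iterations add no linguistic diversity.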