Prompts can switch a model's behavior even when the weights are fixed, yet this phenomenon is usually treated as a heuristic rather than as a clean theoretical object. We study the family of functions obtainable by holding a Transformer backbone fixed as an executor and varying only the prompt. Our core idea is to view the prompt as an externally injected program and to construct a simplified Transformer that interprets it to implement different computations. The construction exposes a mechanism-level decomposition: attention performs selective routing from prompt memory, the FFN performs local arithmetic conditioned on retrieved fragments, and depth-wise stacking composes these local updates into a multi-step computation. Under this viewpoint, we prove a constructive existential result showing that a single fixed backbone can approximate a broad class of target behaviors via prompts alone. The framework provides a unified starting point for formalizing trade-offs under prompt length/precision constraints and for studying structural limits of prompt-based switching, while remaining distinct from empirical claims about pretrained LLMs.
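The mechanism-level decomposition above can be illustrated with a deliberately minimal sketch (not the paper's actual construction; all names and shapes are illustrative assumptions): attention retrieves a fragment from a prompt memory, a fixed FFN-like update applies local arithmetic conditioned on that fragment, and stacking the layer composes the updates, so that changing only the prompt changes the computed function.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_route(query, prompt_keys, prompt_values, temp=0.05):
    """Selective routing: attend over prompt memory, retrieve a fragment."""
    scores = prompt_keys @ query / temp      # (m,) similarity to each memory slot
    weights = softmax(scores)                # near one-hot at low temperature
    return weights @ prompt_values           # retrieved fragment, shape (d,)

def ffn_update(state, fragment):
    """Local arithmetic with FIXED weights; the fragment carries the 'program'."""
    return state + fragment

def run_backbone(x, prompt, n_layers=2):
    """Depth-wise stacking composes local updates into a multi-step computation."""
    keys, values = prompt                    # prompt memory: (m, d) keys and values
    state = x
    for _ in range(n_layers):
        frag = attention_route(state, keys, values)
        state = ffn_update(state, frag)
    return state

# Same fixed backbone, two different prompts, two different functions:
x = np.zeros(2)
prompt_a = (np.ones((1, 2)), np.array([[1.0, 0.0]]))  # "increment coordinate 0"
prompt_b = (np.ones((1, 2)), np.array([[0.0, 1.0]]))  # "increment coordinate 1"
print(run_backbone(x, prompt_a))  # [2. 0.]
print(run_backbone(x, prompt_b))  # [0. 2.]
```

The toy demonstrates only the switching phenomenon, not the approximation result: the executor's weights never change, yet `prompt_a` and `prompt_b` induce different input-output maps.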