Large Language Models (LLMs) are central to reasoning, writing, and decision-support workflows, yet users lack consistent control over how these models reason and express their outputs. Conventional prompt engineering relies on verbose natural-language instructions, which limits reproducibility, modularity, and interpretability. This paper introduces Prompt Decorators, a declarative, composable syntax that governs LLM behavior through compact control tokens such as +++Reasoning, +++Tone(style=formal), and +++Import(topic="Systems Thinking"). Each decorator modifies a single behavioral dimension, such as reasoning style, output structure, or tone, without changing the task content. The framework formalizes twenty core decorators organized into two functional families, Cognitive & Generative and Expressive & Systemic, each further decomposed into subcategories that govern reasoning, interaction, expression, and session control. It defines a unified syntax, a scoping model, and a deterministic processing pipeline that together enable predictable and auditable behavior composition. By decoupling task intent from execution behavior, Prompt Decorators provide a reusable, interpretable interface for prompt design. Illustrative use cases demonstrate improved reasoning transparency, reduced prompt complexity, and standardized model behavior across domains. The paper concludes with implications for interoperability, behavioral consistency, and the development of declarative interfaces for scalable AI systems.
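To make the token syntax concrete, the following is a minimal sketch of how decorator tokens like +++Reasoning or +++Tone(style=formal) might be separated from the task content of a prompt. The grammar and function names here are illustrative assumptions, not the paper's reference implementation.

```python
import re

# Illustrative pattern for decorator tokens of the form
# +++Name or +++Name(key=value, ...); an assumed, simplified grammar.
DECORATOR_RE = re.compile(r'\+\+\+(\w+)(?:\(([^)]*)\))?')

def parse_decorators(prompt: str):
    """Split a prompt into (decorators, task), where each decorator
    is a (name, {param: value}) pair."""
    decorators = []
    for match in DECORATOR_RE.finditer(prompt):
        name, raw_args = match.group(1), match.group(2)
        params = {}
        if raw_args:
            for pair in raw_args.split(","):
                key, _, value = pair.partition("=")
                params[key.strip()] = value.strip().strip('"')
        decorators.append((name, params))
    # The task content is what remains after stripping decorator tokens.
    task = DECORATOR_RE.sub("", prompt).strip()
    return decorators, task

decs, task = parse_decorators(
    '+++Reasoning +++Tone(style=formal) Summarize the report.'
)
# decs == [("Reasoning", {}), ("Tone", {"style": "formal"})]
# task == "Summarize the report."
```

This separation is what allows decorators to compose deterministically: the same task string can be paired with different decorator prefixes without rewriting the task itself.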