We present Universal Conditional Logic (UCL), a mathematical framework that transforms prompt engineering from heuristic practice into systematic optimization. In an evaluation spanning 11 models over 4 iterations (N = 305), we demonstrate a significant 29.8% token reduction (t(10) = 6.36, p < 0.001, Cohen's d = 2.01) with corresponding cost savings. UCL's structural overhead function O_s(A) explains version-specific performance differences through the Over-Specification Paradox: beyond the threshold S* = 0.509, additional specification degrades performance quadratically. The framework's core mechanisms are validated: indicator functions (I_i ∈ {0, 1}), structural overhead (O_s = γ Σ_k ln C_k), and early binding. Notably, the optimal UCL configuration varies by model architecture; certain models (e.g., Llama 4 Scout) require version-specific adaptations (V4.1). This work establishes UCL as a calibratable framework for efficient LLM interaction, with model-family-specific optimization as a key direction for future research.
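For concreteness, the sketch below illustrates how the quantities named in the abstract might be computed: indicator-gated clause selection (I_i ∈ {0, 1}), the structural overhead O_s = γ Σ_k ln C_k, and a quadratic over-specification penalty beyond S* = 0.509. It is an illustrative sketch only: the interpretation of C_k (taken here as the number of alternatives a clause encodes), the values of γ and κ, and the exact penalty form are assumptions, not definitions from the paper.

```python
import math
from dataclasses import dataclass


@dataclass
class Clause:
    text: str
    active: bool   # indicator I_i in {0, 1}
    choices: int   # C_k: assumed here to mean the number of alternatives the clause encodes


def structural_overhead(clauses, gamma=0.1):
    """O_s = gamma * sum(ln C_k) over active clauses (gamma value is illustrative)."""
    return gamma * sum(math.log(c.choices) for c in clauses if c.active and c.choices > 1)


def assemble_prompt(clauses):
    """Early binding: only clauses whose indicator is 1 are bound into the prompt."""
    return "\n".join(c.text for c in clauses if c.active)


def overspec_penalty(spec_level, s_star=0.509, kappa=1.0):
    """Assumed functional form of the Over-Specification Paradox: no penalty up to S*,
    quadratic degradation beyond it. kappa is an illustrative scaling constant."""
    return 0.0 if spec_level <= s_star else kappa * (spec_level - s_star) ** 2


clauses = [
    Clause("Answer concisely.", active=True, choices=2),
    Clause("Cite sources when available.", active=True, choices=3),
    Clause("Use formal tone.", active=False, choices=2),
]

print(assemble_prompt(clauses))
print(f"O_s = {structural_overhead(clauses):.3f}")
print(f"Penalty at S = 0.7: {overspec_penalty(0.7):.4f}")
```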