Recent generative and tool-using AI systems can surface a large volume of candidates at low marginal cost, yet only a small fraction can be checked carefully. This creates a decoder-side bottleneck: downstream decision-makers must form reliable posteriors from many public records under scarce attention. We formalize this regime via Attention-Constrained Inference (ACI), in which a cheap screening stage processes $K$ records and an expensive verification stage can follow up on at most $B$ of them. Under Bayes log-loss, we study the maximum achievable reduction in posterior uncertainty per window, which we call \emph{epistemic throughput}. Our main result is a ``JaKoB'' scaling law showing that epistemic throughput has a baseline term that grows linearly with verification and prevalence, and an additional \emph{information-leverage} term that scales as $\sqrt{JKB}$, where $J$ summarizes screening quality. Thus, expanding cheap screening can nonlinearly amplify scarce verification, even when informative records are rare. We further show that this scaling is tight in a weak-screening limit, and that in the sparse-verification regime ($B \ll K$), substantial leverage requires heavy-tailed score distributions; for light-tailed scores the amplification is only logarithmic.
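To make the shape of the JaKoB scaling law concrete, the following sketch instantiates it with purely illustrative constants (`c_base`, `c_lev`, and the function name `epistemic_throughput` are our assumptions, not quantities from the paper). It shows the nonlinear amplification claimed above: with verification budget $B$ held fixed, quadrupling the screening width $K$ doubles the leverage term.

```python
import math

# Hypothetical instantiation of the JaKoB scaling law described in the abstract:
# throughput = (baseline, linear in verification budget B and prevalence pi)
#            + (information-leverage term scaling as sqrt(J * K * B)).
# The constants c_base and c_lev are illustrative placeholders.
def epistemic_throughput(K, B, J, pi, c_base=1.0, c_lev=1.0):
    baseline = c_base * B * pi          # grows linearly with verification
    leverage = c_lev * math.sqrt(J * K * B)  # grows with the square root of K
    return baseline + leverage

base  = epistemic_throughput(K=10_000, B=10, J=0.01, pi=0.05)
wider = epistemic_throughput(K=40_000, B=10, J=0.01, pi=0.05)
# Quadrupling cheap screening (K: 10_000 -> 40_000) with B fixed
# doubles the leverage term on top of the unchanged baseline.
```

Under these toy constants the baseline contributes $B\pi = 0.5$, so the leverage component of `wider` is exactly twice that of `base`.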
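The tail condition in the sparse-verification regime ($B \ll K$) can be illustrated with a deterministic plug-in calculation (the helper `top_B_sum` and the specific quantile functions are our illustrative choices, not constructions from the paper): summing the $B$ largest plug-in quantiles among $K$ records, a Pareto (heavy-tailed) score distribution gains a polynomial factor as $K$ grows, while an exponential (light-tailed) one gains only a logarithmic factor.

```python
import math

def top_B_sum(quantile, K, B):
    # Deterministic proxy for the best-B screening scores among K records:
    # sum the quantile function at the B highest plug-in positions.
    return sum(quantile(1 - (i + 0.5) / K) for i in range(B))

# Heavy-tailed scores: Pareto with tail index alpha = 1.5.
pareto_q = lambda p, alpha=1.5: (1 - p) ** (-1 / alpha)
# Light-tailed scores: exponential, whose top quantiles grow like log K.
expo_q = lambda p: -math.log(1 - p)

B = 10
for K in (10**3, 10**5):
    print(K, top_B_sum(pareto_q, K, B), top_B_sum(expo_q, K, B))
```

Scaling $K$ from $10^3$ to $10^5$ multiplies the Pareto top-$B$ sum by exactly $100^{2/3} \approx 21.5$, whereas the exponential sum grows by less than a factor of two, matching the abstract's contrast between heavy-tailed leverage and merely logarithmic amplification for light tails.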