Predictive inference requires balancing statistical accuracy against informational complexity, yet the choice of complexity measure is usually imposed rather than derived. We treat econometric objects as predictive rules, mappings from information sets to reported predictive distributions, and impose three structural requirements on their evaluation: locality, strict propriety, and coherence under aggregation (coarsening/refinement) of outcome categories. These axioms characterize the logarithmic score uniquely up to affine transformations and induce Shannon mutual information (expected Kullback-Leibler divergence) as the corresponding measure of predictive complexity. The resulting entropy-regularized prediction problem admits optimal rules of Gibbs form, and we establish an essentially complete-class result for these rules under joint risk-complexity dominance. Rational inattention emerges as the constrained dual: it corresponds to points on the risk-complexity frontier at which the information-capacity constraint binds. The entropy penalty contributes additive curvature to the predictive criterion; in weakly identified settings, such as IV regression with weak instruments, where the unregularized objective is nearly flat, this curvature stabilizes the criterion. We derive a locally asymptotically quadratic (LAQ) expansion connecting entropy regularization to classical weak-identification diagnostics.
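As a quick numerical illustration of the strict-propriety axiom, the sketch below (Python; our illustration, not code from the paper) checks that the expected logarithmic score over a finite outcome space, the cross-entropy $\mathbb{E}_{p}[-\log q(Y)] = H(p) + \mathrm{KL}(p \,\|\, q)$, is uniquely minimized by reporting the true distribution $q = p$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(5))  # "true" predictive distribution on 5 outcomes

def expected_log_score(q, p):
    """Expected negative log score E_p[-log q(Y)], i.e. the cross-entropy H(p, q)."""
    return -np.sum(p * np.log(q))

# H(p, q) = H(p) + KL(p || q) >= H(p), with equality iff q = p, so the log
# score is strictly proper: truthful reporting is the unique optimum.
truth = expected_log_score(p, p)
for _ in range(1000):
    q = rng.dirichlet(np.ones(5))     # random dishonest report
    assert expected_log_score(q, p) >= truth

print("E_p[-log q(Y)] is minimized at q = p, value:", truth)
```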
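For concreteness, one standard way to write the entropy-regularized problem and its Gibbs-form solution (the notation here is ours, introduced for illustration: $q(a \mid x)$ a possibly randomized rule, $\mu$ the law of the conditioning information $X$, $\ell$ a loss, $\lambda > 0$ the price of information) is

\[
\min_{q}\;\mathbb{E}_{\mu}\!\Big[\textstyle\sum_{a} q(a \mid X)\,\ell(X,a)\Big] + \lambda\, I(X;A),
\qquad
q^{*}(a \mid x)\;\propto\;\bar q(a)\,\exp\!\big(-\ell(x,a)/\lambda\big),
\quad
\bar q(a)=\mathbb{E}_{\mu}\big[q^{*}(a \mid X)\big],
\]

so the optimal rule exponentially tilts its own marginal $\bar q$ toward low-loss actions, with $\lambda$ governing the strength of the tilt: as $\lambda \to \infty$ the rule ignores $X$ entirely, and as $\lambda \to 0$ it approaches the unpenalized best response.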
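Because the Gibbs form is only a fixed point (the tilting base $\bar q$ depends on the rule itself), the optimum on a discrete grid is typically computed by a Blahut-Arimoto-style alternation, as in the rational-inattention literature. A minimal sketch under finite $X$ and action grids; all function and variable names are ours:

```python
import numpy as np

def gibbs_rule(loss, mu, lam, n_iter=1000, tol=1e-12):
    """Blahut-Arimoto-type iteration for min_q E_mu[loss(X, A)] + lam * I(X; A).
    loss: (n_x, n_a) loss matrix; mu: (n_x,) distribution of X; lam > 0.
    Returns the rule q[x, a] = q(a | x) and its action marginal q_bar."""
    n_x, n_a = loss.shape
    q_bar = np.full(n_a, 1.0 / n_a)                  # initial action marginal
    for _ in range(n_iter):
        # Gibbs step: q(a|x) proportional to q_bar(a) * exp(-loss(x, a) / lam)
        logits = np.log(np.maximum(q_bar, 1e-300))[None, :] - loss / lam
        logits -= logits.max(axis=1, keepdims=True)  # numerical stabilization
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)
        new_bar = mu @ q                             # marginal step: E_mu[q(a|X)]
        if np.max(np.abs(new_bar - q_bar)) < tol:
            return q, new_bar
        q_bar = new_bar
    return q, q_bar

def mutual_information(q, q_bar, mu):
    """I(X; A) = E_mu[ KL(q(.|X) || q_bar) ], in nats."""
    with np.errstate(divide="ignore", invalid="ignore"):
        kl = np.where(q > 0, q * np.log(q / q_bar[None, :]), 0.0).sum(axis=1)
    return float(mu @ kl)

# Toy usage: quadratic loss on a grid; a larger lam buys a simpler (lower-I) rule.
x_grid = np.linspace(-1, 1, 21)
a_grid = np.linspace(-1, 1, 21)
loss = (x_grid[:, None] - a_grid[None, :]) ** 2
mu = np.full(len(x_grid), 1.0 / len(x_grid))
for lam in (0.01, 0.1, 1.0):
    q, q_bar = gibbs_rule(loss, mu, lam)
    print(f"lam = {lam}: I(X;A) = {mutual_information(q, q_bar, mu):.3f} nats")
```

The alternation between the Gibbs step and the marginal step traces out the risk-complexity frontier as $\lambda$ varies, which is also how the constrained (rational-inattention) dual can be solved by searching over $\lambda$ until the capacity constraint binds.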
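To see the stabilization claim in the weak-IV case, the stylized simulation below compares the curvature of a just-identified IV moment criterion with and without an additive quadratic term standing in for the entropy penalty. In a Gaussian predictive family with known variance, the KL penalty relative to a fixed reference coefficient reduces to such a quadratic; that reduction, and all numbers here, are our illustration, not results from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
z = rng.normal(size=n)                                # instrument
u = rng.normal(size=n)                                # structural error
x = 0.02 * z + 0.8 * u + 0.6 * rng.normal(size=n)     # weak first stage (pi = 0.02)
y = 1.0 * x + u                                       # structural equation, beta = 1

betas = np.linspace(-20.0, 20.0, 801)
g = np.array([z @ (y - b * x) / n for b in betas])    # IV moment E_n[z (y - b x)]
Q = g ** 2                                            # unpenalized GMM-type criterion

lam, beta_ref = 0.05, 0.0
Q_pen = Q + lam * (betas - beta_ref) ** 2             # quadratic entropy-penalty surrogate

def curvature_at_min(obj, grid):
    """Second difference of obj at its (interior) grid minimizer."""
    i = np.argmin(obj[1:-1]) + 1
    h = grid[1] - grid[0]
    return (obj[i - 1] - 2 * obj[i] + obj[i + 1]) / h ** 2

# Q is quadratic in b with curvature 2 * (z'x/n)^2, which is tiny when the
# instrument is weak; the penalty bounds the curvature below by 2 * lam.
print("curvature, unpenalized:", curvature_at_min(Q, betas))
print("curvature, penalized:  ", curvature_at_min(Q_pen, betas))
```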