Evaluation is critical to advancing decision making across domains, yet existing methodologies often struggle to balance theoretical rigor with practical scalability. To reduce the cost of experimental evaluation, we introduce a computational theory of evaluation for parameterisable subjects. We prove upper bounds on the generalized evaluation error and the generalized causal effect error of an evaluation metric on a subject. We also prove the efficiency and consistency of the causal effect of a subject on a metric as estimated by prediction. To optimize evaluation models, we propose a meta-learner that handles heterogeneous evaluation-subject spaces. Compared with other computational approaches, our (conditional) evaluation model reduces evaluation error by 24.1%–99.0% across 12 scenarios, including individual medicine, scientific simulation, business activities, and quantum trade. Evaluation time is reduced by 3–7 orders of magnitude compared with experiments or simulations.
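The core idea of replacing costly experiments with a learned evaluation model can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual method: `run_experiment` is a stand-in for an expensive experiment or simulation, and the surrogate here is a simple least-squares fit on polynomial features rather than the proposed meta-learner.

```python
# Hypothetical sketch: a learned "evaluation model" predicts a metric from a
# subject's parameter vector, so that evaluation costs a prediction instead of
# an experiment. All names and the toy experiment below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(theta):
    # Stand-in for a costly experiment/simulation mapping subject -> metric.
    return np.sin(theta[0]) + 0.5 * theta[1] ** 2

# Collect a small experimental dataset of (subject parameters, metric) pairs.
thetas = rng.uniform(-1, 1, size=(200, 2))
metrics = np.array([run_experiment(t) for t in thetas])

# Fit a cheap surrogate: least squares on quadratic polynomial features.
X = np.hstack([thetas, thetas ** 2, thetas[:, :1] * thetas[:, 1:]])
w, *_ = np.linalg.lstsq(X, metrics, rcond=None)

def evaluate(theta):
    # Predicted metric for a new subject -- no experiment required.
    x = np.hstack([theta, theta ** 2, theta[:1] * theta[1:]])
    return float(x @ w)

# Empirical analogue of the generalized evaluation error: mean squared gap
# between the surrogate's prediction and the experimental ground truth on
# held-out subjects.
test = rng.uniform(-1, 1, size=(50, 2))
err = np.mean([(evaluate(t) - run_experiment(t)) ** 2 for t in test])
```

The theory's error bounds concern exactly this gap `err` between predicted and experimentally measured metrics over the subject space; the meta-learner in the paper generalizes the fitting step to heterogeneous subject spaces.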