We propose a distributional framework for benchmarking socio-technical risks of foundation models with quantified statistical significance. Our approach hinges on a new statistical relative test based on first- and second-order stochastic dominance of real random variables. We show that the second-order statistics in this test are linked to mean-risk models commonly used in econometrics and mathematical finance to balance risk and utility when choosing between alternatives. Using this framework, we formally develop a risk-aware approach for foundation model selection given guardrails quantified by specified metrics. Inspired by portfolio optimization and selection theory in mathematical finance, we define a metrics portfolio for each model as a means to aggregate a collection of metrics, and perform model selection based on the stochastic dominance of these portfolios. The statistical significance of our tests is backed theoretically by an asymptotic analysis via central limit theorems, instantiated in practice via a bootstrap variance estimate. We use our framework to compare various large language models with respect to risks related to drifting from instructions and outputting toxic content.
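As a minimal illustrative sketch (not the paper's implementation), empirical first- and second-order stochastic dominance between two samples of a metric can be checked by comparing their empirical CDFs and the running integrals of those CDFs; the function name `dominance` and the convention that higher metric values are better are assumptions for this example:

```python
import numpy as np

def empirical_cdf(sample, grid):
    # F(x) = fraction of sample points <= x, evaluated on the grid
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def dominance(a, b):
    """Check empirical first- and second-order stochastic dominance
    of sample a over sample b (higher metric values assumed better)."""
    grid = np.sort(np.concatenate([a, b]))
    Fa, Fb = empirical_cdf(a, grid), empirical_cdf(b, grid)
    # First order: F_a(x) <= F_b(x) everywhere
    fsd = bool(np.all(Fa <= Fb))
    # Second order: the integrated CDF of a lies below that of b,
    # approximated here by a cumulative sum over the pooled grid
    dx = np.diff(grid, prepend=grid[0])
    ssd = bool(np.all(np.cumsum(Fa * dx) <= np.cumsum(Fb * dx)))
    return fsd, ssd
```

In practice, as the abstract notes, such point estimates would be paired with a bootstrap variance estimate to attach statistical significance to the dominance decision rather than relying on a single empirical comparison.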