When building AI systems for decision support, one often encounters the phenomenon of predictive multiplicity: a single best model does not exist; instead, one can construct many models with similar overall accuracy that differ in their predictions for individual cases. Especially when decisions have a direct impact on humans, this can be highly unsatisfactory. For a person subject to high disagreement between models, one could just as well have chosen a different model of similar overall accuracy that would have decided the person's case differently. We argue that this arbitrariness conflicts with the EU AI Act, which requires providers of high-risk AI systems to report performance not only at the dataset level but also for specific persons. The goal of this paper is to relate predictive multiplicity to the EU AI Act's provisions on accuracy and to derive concrete suggestions on how to evaluate and report predictive multiplicity in practice. Specifically: (1) We argue that incorporating information about predictive multiplicity can serve compliance with the EU AI Act's accuracy provisions for providers. (2) Based on this legal analysis, we suggest individual conflict ratios and $\delta$-ambiguity as tools to quantify the disagreement between models on individual cases and to help detect individuals subject to conflicting predictions. (3) Based on computational insights, we derive easy-to-implement rules on how model providers could evaluate predictive multiplicity in practice. (4) Ultimately, we suggest that information about predictive multiplicity should be made available to deployers under the AI Act, enabling them to judge whether system outputs for specific individuals are reliable enough for their use case.
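As a minimal sketch of how the metrics in (2) could be computed, the following assumes one plausible reading of the terms: the individual conflict ratio is the share of near-optimal models whose prediction for one case differs from a reference model's, and $\delta$-ambiguity is the share of individuals whose conflict ratio is at least $\delta$. The function names and these exact definitions are illustrative assumptions, not the paper's formal specification.

```python
def conflict_ratio(model_predictions, reference_prediction):
    """Share of near-optimal models that disagree with the reference
    model's prediction for a single individual.

    model_predictions: one prediction per model for this individual.
    reference_prediction: the deployed/reference model's prediction.
    """
    return sum(p != reference_prediction for p in model_predictions) / len(model_predictions)


def delta_ambiguity(predictions_per_model, reference_predictions, delta):
    """Share of individuals whose conflict ratio is at least delta.

    predictions_per_model: one prediction list per model, each of
        length n_individuals (i.e., an n_models x n_individuals grid).
    reference_predictions: the reference model's predictions, length
        n_individuals.
    """
    n_models = len(predictions_per_model)
    n_individuals = len(reference_predictions)
    ratios = [
        sum(preds[i] != reference_predictions[i] for preds in predictions_per_model) / n_models
        for i in range(n_individuals)
    ]
    return sum(r >= delta for r in ratios) / n_individuals
```

For example, with three near-optimal models predicting [1, 1, 0], [0, 1, 0], and [1, 1, 1] for three individuals, and a reference model predicting [1, 0, 1], the per-individual conflict ratios are 1/3, 1, and 2/3, so the 0.5-ambiguity is 2/3: two of the three individuals receive conflicting predictions from at least half the models.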