There are concerns about the fairness of clinical prediction models. 'Fair' models are defined as models whose performance and predictions are not inappropriately influenced by protected attributes such as ethnicity, gender, or socio-economic status. Researchers have raised concerns that current algorithmic fairness paradigms enforce strict egalitarianism in healthcare, levelling down the performance of models in higher-performing subgroups instead of improving it in lower-performing ones. We propose assessing the fairness of a prediction model by expanding the concept of net benefit, using it to quantify and compare the clinical impact of a model in different subgroups. We use this to explore how a model distributes benefit across a population, its impact on health inequalities, and its role in the achievement of health equity. We show how resource constraints might introduce necessary trade-offs between health equity and other objectives of healthcare systems. We showcase our proposed approach with the development of two clinical prediction models: 1) a prognostic type 2 diabetes model used by clinicians to enrol patients into a preventive lifestyle intervention programme, and 2) a lung cancer screening algorithm used to allocate diagnostic scans across the population. This approach helps modellers better understand whether a model upholds health equity by considering its performance in a clinical and social context.
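The subgroup comparison described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes the standard net benefit formula from decision curve analysis (NB = TP/N - FP/N x pt/(1 - pt) at risk threshold pt) and hypothetical helper names (`net_benefit`, `subgroup_net_benefit`).

```python
# Hedged sketch: compare a model's clinical net benefit across subgroups
# defined by a protected attribute, at a chosen risk threshold.
# Assumes the standard decision-curve-analysis definition of net benefit;
# function names are illustrative, not from the paper.

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating all patients with predicted risk >= threshold.

    NB = TP/N - FP/N * threshold / (1 - threshold)
    """
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

def subgroup_net_benefit(y_true, y_prob, groups, threshold):
    """Net benefit computed separately within each subgroup label."""
    result = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        result[g] = net_benefit([y_true[i] for i in idx],
                                [y_prob[i] for i in idx],
                                threshold)
    return result
```

Comparing `subgroup_net_benefit` outputs across, say, ethnicity groups shows how much clinical benefit the model delivers to each subgroup at the operating threshold, rather than only comparing discrimination metrics.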