Equity in AI for healthcare is crucial due to its direct impact on human well-being. Despite advances in 2D medical imaging fairness, the fairness of 3D models remains underexplored, hindered by the small sizes of 3D fairness datasets. Since 3D imaging surpasses 2D imaging in state-of-the-art clinical care, understanding the fairness of these 3D models is critical. To address this research gap, we conduct the first comprehensive study of the fairness of 3D medical imaging models across multiple protected attributes. Our investigation spans both 2D and 3D models, evaluating fairness across five architectures on three common eye diseases and revealing significant biases across race, gender, and ethnicity. To mitigate these biases, we propose a novel fair identity scaling (FIS) method that improves both overall performance and fairness, outperforming various state-of-the-art fairness methods. Moreover, we release Harvard-FairVision, the first large-scale medical fairness dataset with 30,000 subjects featuring both 2D and 3D imaging data and six demographic identity attributes. Harvard-FairVision provides labels for three major eye disorders that affect about 380 million people worldwide, serving as a valuable resource for both 2D and 3D fairness learning. Our code and dataset are publicly accessible at \url{https://ophai.hms.harvard.edu/datasets/harvard-fairvision30k}.
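The disparities reported above are typically surfaced by comparing a model's performance across protected groups. As a minimal illustration of this kind of probe (not the paper's exact metric — the function name, toy data, and use of accuracy rather than AUC are illustrative assumptions), the sketch below computes per-group accuracy and the max-min gap across groups:

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Per-group accuracy and the max-min gap across protected groups.

    A simple fairness probe of the kind used to reveal performance
    disparities across attributes such as race, gender, or ethnicity.
    (Illustrative sketch; the paper's evaluation may use other metrics.)
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    # Accuracy restricted to each protected group.
    accs = {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
    # A nonzero gap indicates the model favors some groups over others.
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Toy example: a classifier that is perfect on group "A" but errs on "B".
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0]
groups = ["A", "A", "B", "B", "B", "B"]
accs, gap = group_accuracy_gap(y_true, y_pred, groups)
# accs == {"A": 1.0, "B": 0.5}, gap == 0.5
```

Fairness methods such as the proposed FIS aim to shrink this gap without degrading the overall metric; the sketch only shows how the disparity itself would be measured.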