As machine learning models become increasingly integrated into healthcare, structural inequities and social biases embedded in clinical data can be perpetuated or even amplified by data-driven models. In survival analysis, censoring and time dynamics add further complexity to fair model development. Moreover, algorithmic fairness approaches often overlook disparities in cross-group rankings: for example, a high-risk Black patient may be ranked below a lower-risk White patient who never experiences the mortality event. Such misranking can reinforce biological essentialism and undermine equitable care. We propose Fairness-Aware Survival Modeling (FASM), designed to mitigate algorithmic bias in both intra-group and cross-group risk rankings over time. Using breast cancer prognosis as a representative case and applying FASM to SEER breast cancer data, we show that FASM substantially improves fairness while preserving discrimination performance comparable to that of fairness-unaware survival models. Time-stratified evaluations show that FASM maintains stable fairness over a 10-year horizon, with the greatest improvements observed in the mid-term of follow-up. Our approach enables survival models that prioritize both accuracy and equity in clinical decision-making, advancing fairness as a core principle of clinical care.
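The abstract's central fairness notion, cross-group misranking, can be made concrete with a small evaluation sketch. The code below is a minimal, hypothetical illustration (not the FASM method itself) of how one might quantify cross-group versus within-group ranking quality with a Harrell-style concordance index restricted to pairs from different or the same groups; the function names and the toy cohort are assumptions for illustration only.

```python
import numpy as np

def comparable_pairs(time, event):
    """Yield index pairs (i, j) where patient i dies before patient j's
    observed time, so i should receive the higher predicted risk."""
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # censored patients cannot anchor a comparable pair
        for j in range(n):
            if time[i] < time[j]:
                yield i, j

def concordance(risk, time, event, mask=None):
    """Fraction of comparable pairs ranked correctly by the risk score.
    An optional mask(i, j) restricts which pairs are counted."""
    correct = total = 0
    for i, j in comparable_pairs(time, event):
        if mask is not None and not mask(i, j):
            continue
        total += 1
        correct += risk[i] > risk[j]
    return correct / total if total else float("nan")

# Toy cohort (assumed values): model risk scores, follow-up times in years,
# event indicators (1 = death observed), and a binary group label.
risk  = np.array([0.9, 0.4, 0.7, 0.2, 0.8, 0.3])
time  = np.array([2.0, 8.0, 3.5, 9.0, 1.5, 7.0])
event = np.array([1,   0,   1,   0,   1,   0])
group = np.array([0,   0,   0,   1,   1,   1])

overall = concordance(risk, time, event)
cross   = concordance(risk, time, event, mask=lambda i, j: group[i] != group[j])
within  = concordance(risk, time, event, mask=lambda i, j: group[i] == group[j])
print(f"overall C: {overall:.2f}  cross-group C: {cross:.2f}  within-group C: {within:.2f}")
```

A large gap between the cross-group and within-group values of this statistic would indicate the kind of misranking the abstract describes, e.g., patients who experience the event being ranked below unaffected patients from another group.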