Deep-learning-based diagnostic AI systems operating on medical images are beginning to match the performance of human experts. However, these data-hungry, complex systems are inherently black boxes and are therefore slow to be adopted for high-risk applications such as healthcare. This lack of transparency is exacerbated in recent large foundation models, which are trained in a self-supervised manner on millions of data points to provide robust generalisation across a range of downstream tasks; the embeddings they generate, however, arise through a process that is not interpretable and hence not easily trusted for clinical applications. To address this timely issue, we apply conformal analysis to quantify the predictive uncertainty of a vision transformer (ViT) based foundation model across patient demographics (sex, age, and ethnicity) for the task of skin lesion classification on several public benchmark datasets. A significant advantage of conformal analysis is that it is model-agnostic: it not only provides a coverage guarantee at the population level but also yields an uncertainty score for each individual. During model training we used a model-agnostic, dynamic F1-score-based sampling strategy, which helped stabilise the class imbalance, and we investigate the effects on uncertainty quantification (UQ) with and without this bias-mitigation step. We thus show how conformal analysis can serve as a fairness metric to evaluate the robustness of the feature embeddings of the foundation model (Google DermFoundation) and thereby advance the trustworthiness and fairness of clinical AI.
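The core mechanism referred to above, split conformal prediction for classification, can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: it uses the common nonconformity score of one minus the predicted probability of the true class, a finite-sample-corrected quantile over a held-out calibration set, and synthetic probabilities in place of real model outputs; all names and data here are hypothetical.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    Nonconformity score: 1 - predicted probability of the true class.
    Returns one prediction set per test example; under exchangeability,
    the sets cover the true label with probability >= 1 - alpha
    (a marginal, population-level guarantee)."""
    n = len(cal_labels)
    # Score each calibration example by 1 - p(true class).
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, level, method="higher")
    # Include every class whose nonconformity score is within the threshold;
    # the set size doubles as a per-individual uncertainty score.
    return [np.flatnonzero(1.0 - p <= q) for p in test_probs]

# Toy run with synthetic class probabilities standing in for model outputs.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = cal_probs.argmax(axis=1)  # pretend the model is usually right
test_probs = rng.dirichlet(np.ones(3), size=5)
sets = conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
```

Evaluating the average set size and empirical coverage separately within each demographic subgroup (sex, age, ethnicity) is what turns this per-individual uncertainty measure into a fairness probe.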