In clinical machine learning, the coexistence of multiple models with comparable performance (a manifestation of the Rashomon Effect) poses fundamental challenges for trustworthy deployment and evaluation. Small, imbalanced, and noisy datasets, coupled with high-dimensional and weakly identified clinical features, amplify this multiplicity and make conventional validation schemes unreliable. As a result, selecting among equally performing models becomes uncertain, particularly when conventional metrics such as the F1 score ignore resource constraints and operational priorities. To address these issues, we propose two complementary tools for robust model assessment and selection: Intervention Efficiency (IE) and the Perturbation Validation Framework (PVF). IE is a capacity-aware metric that quantifies how efficiently a model identifies actionable true positives when only a limited number of interventions is feasible, thereby linking predictive performance to clinical utility. PVF provides a structured approach to assessing model stability under data perturbations, identifying models whose performance remains most invariant across noisy or shifted validation sets. Empirical results on synthetic and real-world healthcare datasets show that these tools facilitate the selection of models that generalize more robustly and respect capacity constraints, offering a new direction for tackling the Rashomon Effect in clinical settings.
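The abstract does not state IE's formula. A minimal sketch, assuming IE behaves like a precision-at-capacity measure (true positives recovered per intervention within a budget of k interventions); the function name, signature, and this exact definition are illustrative assumptions, not necessarily the paper's:

```python
import numpy as np

def intervention_efficiency(y_true, y_score, capacity):
    """Fraction of a limited intervention budget spent on true positives.

    Ranks cases by predicted risk and asks how many of the `capacity`
    highest-ranked cases are actual positives, i.e. interventions that
    would have been well spent. (Hypothetical reading of IE.)
    """
    order = np.argsort(y_score)[::-1]           # highest predicted risk first
    selected = np.asarray(y_true)[order[:capacity]]
    return selected.sum() / capacity            # true positives per intervention
```

For example, with `y_true = np.array([0, 1, 0, 1, 1, 0, 0, 1])`, `y_score = np.array([0.2, 0.9, 0.4, 0.7, 0.6, 0.1, 0.3, 0.8])`, and `capacity=3`, the three highest-scored cases are all true positives, so the sketch returns 1.0: every feasible intervention lands on an actionable case.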
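Likewise for PVF, a minimal sketch of the idea of scoring stability across perturbed validation sets, assuming a scikit-learn-style fitted classifier. The perturbation scheme (bootstrap resampling plus additive Gaussian feature noise), the base metric (F1), and the aggregation (mean minus standard deviation) are all assumptions standing in for whatever the framework specifies:

```python
import numpy as np
from sklearn.metrics import f1_score

def perturbation_stability(model, X_val, y_val, n_perturbations=50,
                           noise_scale=0.05, seed=0):
    """Score a fitted classifier by how invariant its F1 stays across
    perturbed copies of the validation set (higher is better)."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_perturbations):
        # One perturbed validation set: bootstrap resampling plus additive
        # Gaussian feature noise, standing in for "noisy or shifted" data.
        idx = rng.choice(len(X_val), size=len(X_val), replace=True)
        X_p = X_val[idx] + rng.normal(0.0, noise_scale, size=X_val[idx].shape)
        scores.append(f1_score(y_val[idx], model.predict(X_p)))
    scores = np.asarray(scores)
    # Reward high average performance, penalize sensitivity to perturbation.
    return scores.mean() - scores.std()
```

Under this sketch, two models with identical F1 on the unperturbed validation set can receive different stability scores, which is exactly the kind of tie-breaking among Rashomon-equivalent models the abstract describes.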