\textbf{Background:} Regulatory frameworks for AI in healthcare, including the EU AI Act and FDA guidance on AI/ML-based medical devices, require clinical decision support to demonstrate not only accuracy but auditability. Existing formal languages for clinical logic validate syntactic and structural correctness but not whether decision rules use epistemologically appropriate evidence. \textbf{Methods:} Drawing on design-by-contract principles, we introduce meta-predicates -- predicates about predicates -- for asserting epistemological constraints on clinical decision rules expressed in a DSL. An epistemological type system classifies annotations along four dimensions: purpose, knowledge domain, scale, and method of acquisition. Meta-predicates assert which evidence types are permissible in any given rule. The framework is instantiated in AnFiSA, an open-source platform for genetic variant curation, and demonstrated using the Brigham Genomics Medicine protocol on 5.6 million variants from the Genome in a Bottle benchmark. \textbf{Results:} Decision trees used in variant interpretation can be reformulated as unate cascades, enabling per-variant audit trails that identify which rule classified each variant and why. Meta-predicate validation catches epistemological errors before deployment, whether rules are human-written or AI-generated. The approach complements post-hoc methods such as LIME and SHAP: where explanation reveals what evidence was used after the fact, meta-predicates constrain what evidence may be used before deployment, while preserving human readability. \textbf{Conclusions:} Meta-predicate validation is a step toward demonstrating not only that decisions are accurate but that they rest on appropriate evidence in ways that can be independently audited. While demonstrated in genomics, the approach generalises to any domain requiring auditable decision logic.
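The meta-predicate idea described in Methods can be illustrated with a minimal sketch. All names, annotation values, and the four-field classification below are hypothetical stand-ins, not AnFiSA's actual API; the point is only that a meta-predicate inspects which annotations a rule declares, not the values they take at runtime.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical annotation type carrying the four epistemological
# dimensions named in Methods: purpose, knowledge domain, scale,
# and method of acquisition. Field values are illustrative.
@dataclass(frozen=True)
class Annotation:
    name: str     # e.g. "ClinVar_significance"
    purpose: str  # e.g. "interpretation" or "filtering"
    domain: str   # knowledge domain, e.g. "clinical_genetics"
    scale: str    # e.g. "categorical" or "continuous"
    method: str   # method of acquisition, e.g. "curated"

# A meta-predicate is a predicate about predicates: it constrains the
# evidence a decision rule may use, before deployment.
MetaPredicate = Callable[[List[Annotation]], bool]

def only_curated_evidence(used: List[Annotation]) -> bool:
    """Assert that a rule relies solely on expert-curated evidence."""
    return all(a.method == "curated" for a in used)

def validate_rule(used: List[Annotation],
                  contracts: List[MetaPredicate]) -> bool:
    """Design-by-contract check: every contract must hold for the
    annotations the rule declares."""
    return all(contract(used) for contract in contracts)

clinvar = Annotation("ClinVar_significance", "interpretation",
                     "clinical_genetics", "categorical", "curated")
cadd = Annotation("CADD_score", "interpretation",
                  "computational_prediction", "continuous", "computational")

print(validate_rule([clinvar], [only_curated_evidence]))        # True
print(validate_rule([clinvar, cadd], [only_curated_evidence]))  # False
```

The check runs statically over the rule's declared evidence, which is why it applies equally to human-written and AI-generated rules.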
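The unate-cascade reformulation mentioned in Results can be sketched as an ordered list of rules in which the first rule that fires classifies the variant, so the audit trail is simply the name of that rule. Rule names, fields, and thresholds below are illustrative, not the Brigham Genomics Medicine protocol itself.

```python
from typing import Callable, Dict, List, Tuple

# A decision tree rewritten as a cascade: evaluate rules in order and
# stop at the first one that fires. The winning rule's name is the
# per-variant audit trail (which rule classified it, and why).
Variant = Dict[str, float]
Rule = Tuple[str, Callable[[Variant], bool], str]  # (name, predicate, label)

CASCADE: List[Rule] = [
    ("common_variant", lambda v: v["allele_freq"] > 0.05, "benign"),
    ("high_impact",    lambda v: v["impact_score"] >= 0.9, "pathogenic"),
]
DEFAULT = ("fallthrough", "uncertain")

def classify(variant: Variant) -> Tuple[str, str]:
    """Return (label, rule_name) for one variant."""
    for name, predicate, label in CASCADE:
        if predicate(variant):
            return label, name
    return DEFAULT[1], DEFAULT[0]

print(classify({"allele_freq": 0.12, "impact_score": 0.2}))
# ('benign', 'common_variant')
```

Because each variant's classification is attributed to exactly one rule, logging the returned rule name over all 5.6 million benchmark variants yields the per-variant audit trail described above.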