Modern clinical practice relies on evidence-based guidelines implemented as compact scoring systems composed of a small number of interpretable decision rules. While machine-learning models achieve strong performance, many fail to translate into routine clinical use due to misalignment with workflow constraints such as memorability, auditability, and bedside execution. We argue that this gap arises not from insufficient predictive power, but from optimizing over model classes that are incompatible with guideline deployment. Deployable guidelines often take the form of unit-weighted clinical checklists, formed by thresholding the sum of binary rules, but learning such scores requires searching an exponentially large discrete space of possible rule sets. We introduce AgentScore, which performs semantically guided optimization in this space by using LLMs to propose candidate rules and a deterministic, data-grounded verification-and-selection loop to enforce statistical validity and deployability constraints. Across eight clinical prediction tasks, AgentScore outperforms existing score-generation methods and achieves AUC comparable to more flexible interpretable models despite operating under stronger structural constraints. On two additional externally validated tasks, AgentScore achieves higher discrimination than established guideline-based scores.
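To make the model class concrete, the following is a minimal illustrative sketch of a unit-weighted clinical checklist: binary rules are summed with unit weights and the total is thresholded to produce a prediction. This is only an illustration of the structural constraint described above, not the AgentScore method itself; the rules, threshold, and patient fields are hypothetical.

```python
# Sketch of a unit-weighted checklist: K binary rules, unit weights, thresholded sum.
# Illustrative only; rules, threshold, and patient fields are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

Rule = Callable[[Dict[str, float]], bool]  # maps a patient record to a binary rule outcome


@dataclass
class UnitWeightedChecklist:
    rules: List[Rule]
    threshold: int  # predict positive if at least this many rules fire

    def score(self, patient: Dict[str, float]) -> int:
        # Unit-weighted sum of binary rule outcomes.
        return sum(int(rule(patient)) for rule in self.rules)

    def predict(self, patient: Dict[str, float]) -> bool:
        return self.score(patient) >= self.threshold


# Hypothetical example rules with illustrative cutoffs.
checklist = UnitWeightedChecklist(
    rules=[
        lambda p: p["age"] >= 65,
        lambda p: p["systolic_bp"] < 90,
        lambda p: p["respiratory_rate"] >= 22,
    ],
    threshold=2,
)

patient = {"age": 71, "systolic_bp": 85, "respiratory_rate": 18}
print(checklist.score(patient), checklist.predict(patient))  # -> 2 True
```

Learning such a score amounts to selecting which binary rules to include and where to set the threshold, a discrete search that grows exponentially with the size of the candidate rule pool.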