We present SynCABEL (Synthetic Contextualized Augmentation for Biomedical Entity Linking), a framework that addresses a central bottleneck in supervised biomedical entity linking (BEL): the scarcity of expert-annotated training data. SynCABEL leverages large language models to generate context-rich synthetic training examples for all candidate concepts in a target knowledge base, providing broad supervision without manual annotation. We demonstrate that SynCABEL, when combined with decoder-only models and guided inference, establishes new state-of-the-art results across three widely used multilingual benchmarks: MedMentions for English, QUAERO for French, and SPACCC for Spanish. Evaluating data efficiency, we show that SynCABEL matches the performance of full human supervision with up to 60% less annotated data, substantially reducing reliance on labor-intensive and costly expert labeling. Finally, acknowledging that standard evaluation based on exact code matching often underestimates clinically valid predictions due to ontology redundancy, we introduce an LLM-as-a-judge protocol. This analysis reveals that SynCABEL significantly improves the rate of clinically valid predictions. Our synthetic datasets, models, and code are released to support reproducibility and future research.