The application of deep learning methods, particularly foundation models, in biological research has surged in recent years. These models can be text-based or trained on underlying biological data, especially omics data of various types. However, comparing the performance of these models consistently has proven to be a challenge due to differences in training data and downstream tasks. To tackle this problem, we developed an architecture-agnostic benchmarking approach that, instead of evaluating the models directly, extracts entity representation vectors from each model and trains simple predictive models on them for each benchmarking task. This ensures that all types of models are evaluated using the same input and output types. Here we focus on gene properties collected from professionally curated bioinformatics databases. These gene properties are categorized into five major groups: genomic properties, regulatory functions, localization, biological processes, and protein properties. Overall, we define hundreds of benchmark tasks from these databases, spanning binary, multi-label, and multi-class classification. We apply these benchmark tasks to evaluate expression-based models, large language models, protein language models, DNA-based models, and traditional baselines. Our findings suggest that text-based models and protein language models generally outperform expression-based models on genomic-property and regulatory-function tasks, whereas expression-based models demonstrate superior performance on localization tasks. These results should aid in the development of more informed artificial intelligence strategies for biological understanding and therapeutic discovery. To ensure the reproducibility and transparency of our findings, we have made the source code and benchmark data publicly accessible for further investigation and expansion at github.com/BiomedSciAI/gene-benchmark.
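The core of the approach — probing each model's representation vectors with the same lightweight classifier on every task — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embeddings and labels here are synthetic stand-ins for the per-gene vectors a foundation model would produce and for a binary gene-property label from a curated database, and the choice of logistic regression as the probe is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for fixed-length representation vectors extracted from
# some foundation model: one vector per gene.
n_genes, embed_dim = 200, 32
embeddings = rng.normal(size=(n_genes, embed_dim))

# Stand-in for a binary gene-property label from a curated database
# (here correlated with the first embedding dimension so the probe
# has signal to recover).
labels = (embeddings[:, 0] + 0.5 * rng.normal(size=n_genes) > 0).astype(int)

# The same simple predictive model is trained on every model's
# vectors for every task, so performance differences reflect the
# representations rather than the downstream architecture.
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, embeddings, labels, cv=5, scoring="roc_auc")
print(f"mean ROC AUC: {scores.mean():.2f}")
```

In the actual benchmark, this probing step would be repeated for each model and each of the hundreds of tasks (with multi-label and multi-class variants swapping in the appropriate classifier and metric), yielding directly comparable scores across architectures.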